[v3] Refactor to use packet metadata instead of overloaded receiver #36
Conversation
Couple of questions, but otherwise this looks good!
Handling the acks async really cleans this up nicely! Well done
@@ -72,12 +98,36 @@ func (k Keeper) Logger(ctx sdk.Context) log.Logger {
	return ctx.Logger().With("module", "x/"+host.ModuleName+"-"+types.ModuleName)
}

func (k Keeper) WriteAcknowledgementForForwardedPacket(
This is what gives us the ability to do async ack handling, right?
Yes, exactly. For the multi-hop refund cases, we avoid a local chain refund on the intermediate chains by calling this and NOT calling `OnAcknowledgementPacket`/`OnTimeoutPacket`, which would issue a refund. This makes sure the funds burned/escrowed on the forward remain in that state, so that funds are not duplicated.
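As a rough sketch of the pattern being described (the exact signature and wiring here are illustrative, not the PR's code):

```go
// On an intermediate chain, when the ack (or timeout) for the forwarded packet
// comes back, propagate the result to the previous hop instead of refunding
// locally. inFlightPacket links the forwarded packet back to the packet this
// chain originally received.
func (am AppModule) handleForwardedPacketResult(
	ctx sdk.Context,
	forwardedPacket channeltypes.Packet,
	inFlightPacket *types.InFlightPacket,
	ack channeltypes.Acknowledgement,
) error {
	// Deliberately NOT calling am.app.OnAcknowledgementPacket or
	// am.app.OnTimeoutPacket here: either would trigger a local refund and
	// duplicate funds already escrowed/burned for the forward.
	return am.keeper.WriteAcknowledgementForForwardedPacket(ctx, forwardedPacket, inFlightPacket, ack)
}
```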
After this comment, I realized that we should:
- burn the escrowed funds on refund if the chain is the denom source for the forward channel but not the return channel
- transfer the funds from the forward channel escrow account to the return channel escrow account if the chain is the denom source for both the forward channel and the return channel (i.e. the denom is native to the chain)

With these updates, the escrow account balances are the same as if all of the transfers had happened independently, without the forward middleware (a rough sketch of this branching is below).

I added escrow account assertions to the end-to-end tests and added a test to exercise both the escrow burn and the escrow transfer: https://github.com/strangelove-ventures/ibctest/blob/393a8bc46994f5606e9f2fc669c7f6efe02eddf5/examples/ibc/packet_forward_test.go#L604-L616
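A sketch of that refund-time branching, using the transfer module's source/escrow helpers; `moveBetweenEscrows` and `burnEscrowedFunds` are hypothetical placeholders for the keeper calls:

```go
// forwardPort/forwardChannel is the hop this chain forwarded on;
// returnPort/returnChannel is the channel the packet originally arrived on.
forwardEscrow := transfertypes.GetEscrowAddress(forwardPort, forwardChannel)
returnEscrow := transfertypes.GetEscrowAddress(returnPort, returnChannel)

if transfertypes.SenderChainIsSource(forwardPort, forwardChannel, denom) {
	// Funds were escrowed (not burned) in forwardEscrow when the forward was sent.
	if transfertypes.SenderChainIsSource(returnPort, returnChannel, denom) {
		// Source for both channels: the denom is native here, so move the funds
		// from the forward escrow to the return escrow, matching what two
		// independent transfers would have produced.
		moveBetweenEscrows(ctx, forwardEscrow, returnEscrow, token) // hypothetical helper
	} else {
		// Source for the forward channel only: the escrowed funds are vouchers
		// minted when the packet arrived over the return channel, and the actual
		// refund happens on an upstream chain, so burn them here to avoid
		// duplicating funds.
		burnEscrowedFunds(ctx, forwardEscrow, token) // hypothetical helper
	}
}
```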
} else {
	retries = am.retriesOnTimeout
}
var timeout time.Duration
nit: Could you name this `timeoutDuration` or `relativeTimeout` so it doesn't get confused with the absolute timeout set in the packet?
"dst-channel", packet.DestinationChannel, "dst-port", packet.DestinationPort, | ||
"error", err, | ||
) | ||
return am.app.OnAcknowledgementPacket(ctx, packet, acknowledgement, relayer) |
Why are you calling the underlying app here instead of returning an error?
I did this in order to avoid making assumptions about the expected packet data. In the case that a middleware further down the stack is expecting a packet that is not `transfertypes.FungibleTokenPacketData`, this allows it to fall through.
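A sketch of that fall-through pattern (illustrative; the logging shown in the diff is omitted):

```go
var data transfertypes.FungibleTokenPacketData
if err := transfertypes.ModuleCdc.UnmarshalJSON(packet.GetData(), &data); err != nil {
	// Not a fungible token packet: another middleware or the underlying app
	// further down the stack may still know how to handle it, so pass it
	// through instead of returning the unmarshal error.
	return am.app.OnAcknowledgementPacket(ctx, packet, acknowledgement, relayer)
}
```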
"amount", data.Amount, "denom", data.Denom, | ||
) | ||
|
||
inFlightPacket, err := am.keeper.TimeoutShouldRetry(ctx, packet) |
Deferring to the intended users here.
But it seems to me that this may have been requested before async acks were used? Now with async acks, multi-hop is atomic: it either reaches the final destination or it ends back at the sender chain, so I don't see a need to implement retries directly in-protocol.
I think it's cleaner to have this base layer in the state machine just retry once, and if that fails, the retry logic can be implemented by callers on the sender chain.
The retry-on-timeout logic is more useful after point-of-no-return actions, such as a swap before a forward. Since we cannot refund back to the sender chain after a successful swap, it is helpful to retry on timeout for the forwards after the swap.
Interesting... it seems to me like doing something complicated in the middle is a bit dangerous with async acks.
Suppose you swap in the middle and then it times out at the destination (let's suppose it does for all of your tries).
If the error ack is sent, then the tokens are refunded in their original denom, even though the original denom funds on the router chain were put into a DEX and swapped out for something new. What would happen now to those swapped funds?
The scheme is fine in the case where the chains in the middle are acting as pure pass-throughs. I'm worried about the potential issues that would arise if the chains in the middle are doing something more complicated.
Is this an expected use case people need? Is it fine to just swap at the final destination chain for most use cases?
I'm unfamiliar with async acks (where can I find information about this?), but I agree with Aditya that doing something complicated in the middle breaks the composability of packet forward.
I think it should be the responsibility of whoever implements the functionality in the middle to determine what happens to the in-transit tokens after a failure and act accordingly. This is equivalent to only allowing actions (i.e. other middlewares) in sink zones. I'm not sure if we can enforce this (maybe it can be done by forcing forwarded packets to go straight to the channel?), but I think documenting it should be enough, since anybody implementing other actions in the middle already has to think about these scenarios.
Is the idea of retries after point-of-no-return just to reduce the likelihood of having to end up in this case (due to congestion or temporary liveness issues, for example)?
> If the error ack is sent, then the tokens are refunded in their original denom, even though the original denom funds on the router chain were put into a DEX and swapped out for something new. What would happen now to those swapped funds?

For the swap-and-forward case, the atomicity from the sender will only be up to the swap. A successful ack will be proxied back in the case of a successful swap, regardless of the success/fail of the forward after the swap. This will be necessary with or without retry-on-timeout, so that refunds don't happen after a successful swap.
> Is this an expected use case people need? Is it fine to just swap at the final destination chain for most use cases?

The primary use case of swap and forward will be a swap on chain B in an A->B->C transfer. E.g. start with the native denom on A, swap ibc denom A for ibc denom C on chain B's DEX, then forward to C to end up with the native denom C on chain C. This can all occur from a single user-initiated transfer on A.
> I'm unfamiliar with async acks (where can I find information about this?)

There are a few diagrams in the PR description with numbers to indicate the steps of the packet flow, showing the acks being held until the ack of the forwarded packet comes back.
> Is the idea of retries after point-of-no-return just to reduce the likelihood of having to end up in this case (due to congestion or temporary liveness issues, for example)?

The retries can be configured for any of the hops, not just after a point of no return. The chain sets the defaults; including them in the packet memo is optional (retries could be 0 for no retries). But yes, exactly, it helps with relayer liveness issues.
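For illustration, a forward memo that overrides the chain defaults for one hop could look like the following; the receiver address, channel, and timeout values are made up, and the timeout is assumed to be a relative value in whatever encoding the middleware expects:

```json
{
  "forward": {
    "receiver": "cosmos1exampleexampleexampleexample",
    "port": "transfer",
    "channel": "channel-0",
    "timeout": 600000000000,
    "retries": 2
  }
}
```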
> A successful ack will be proxied back in the case of a successful swap, regardless of the success/fail of the forward after the swap. This will be necessary with or without retry-on-timeout, so that refunds don't happen after a successful swap.

To clarify this: we can still retain atomic behavior for swap and forward. In the swap middleware, we can hold the ack until the transfer after the swap returns an ack. Then, regardless of success/error in the forward(s) after the swap, we send back a successful ack, but it can contain a specific error message for failures after the swap. That way there is no risk of funds being refunded on prior chains after the swap is successful, and the user can still poll for the ack on the first chain and check the ack message for success/fail of the whole operation. For example, if the A->B->C example above has a successful swap on B but a failure on the forward from B to C, you would get a successful ack on A with the message, "Failed to forward to chain C after max retries after a successful swap on B. Funds are now in denom CB on chain B".
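In ibc-go terms, that amounts to writing a result (success) acknowledgement whose payload carries the failure detail. A rough sketch, not the PR's code:

```go
// A "successful" ack whose result string reports what happened after the swap,
// so upstream chains do not refund, but the original sender can still read the
// outcome from the ack. It would be written back for the original packet by
// the (separate) swap middleware; the surrounding wiring is illustrative.
ack := channeltypes.NewResultAcknowledgement([]byte(
	"Failed to forward to chain C after max retries after a successful swap on B. " +
		"Funds are now in denom CB on chain B",
))
```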
Ahh, that makes sense. I thought "async acks" was an ibc-go feature I didn't know about.
The way we're planning to implement this on Osmosis is that a swap happens at the final destination (Osmosis) and then we initiate a new transfer after the swap happens. We would be responsible for making sure the right amount of tokens are returned if that last transfer fails. So even if the swap happens on chain B in your example, that's not a case of swap-and-forward from the PoV of this middleware. Ending the IBC transaction with a success ack when doing the swap has the disadvantage that the sender will receive a success and then have to wait for a second IBC transaction to receive the assets, but I don't think we can recover from a failure after the swap without being able to override the behaviour of …
Can you add docs / examples on how to use this new format? iiuc you add a string to the memo field of the IBC packet with the following info:

{
  "receiver": "osmo1asdfasdfasdfasdfasdfasdfasdf",
  "port": "transfer",
  "channel": "channel-0",
  "timeout": 0,
  "retries": 0,
  "next": null
}

where `next` is the memo for the next hop. Does it need to be escaped?
Agreed, if you are going to handle the transfer logic after the swap within wasm, then the forward middleware will not be used on the Osmosis chain for the A->B->C example. For an A->B(osmosis)->C->D example, the packet forward middleware could still be used on chain C. There might be a good way on Osmosis to avoid re-writing the forward logic, though: execute the swap contract on the VM, then fall through to the packet forward middleware based on the success/fail of the contract execution. For other DEXes that use an SDK module for handling the swap, falling through to the packet-forward-middleware after a successful swap is more straightforward. We have an implementation of the swap middleware that uses this approach.
Yes definitely, we can get this added. A minimal example would look like this (see the sketch below). The root `forward` key is what tells the forward middleware to handle the packet. The entire string must be valid JSON, so yes, the inner quotes need to be escaped wherever the memo is embedded as a string.
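A minimal memo of this shape, using made-up receiver and channel values, might be:

```json
{
  "forward": {
    "receiver": "chain-c-bech32-address",
    "port": "transfer",
    "channel": "channel-0"
  }
}
```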
Is the swap middleware just calling the exported functions of this middleware? Happy to jump on a call and brainstorm ideas around this.
Pulls in ibc-go v3.4.0, which has the backported `FungibleTokenPacketData` memo string and also the packet sequence in the `MsgTransferResponse`, two pieces which are necessary to have a clean implementation of the forward middleware.

This removes the need to overload the receiver field with delimited data and updates to use the memo instead. The receiver field is reverted to typical usage of the next receiver address string.

This effort aims to standardize usage of the packet memo for multi-hop IBC interactions. The format proposed here is JSON with root keys that indicate the intended consumer of the data (middleware or otherwise). Special thanks to Dev Ojha and Nicolas Lara from the Osmosis team for collaborating on this schema.

For example, to enqueue a swap action that should be handled by a swap middleware, a root key such as `swap` or `wasm` can be used rather than `forward`. To be clear, it is not the intention of the packet forward middleware to support other memo root keys, but chained middleware is compatible with this JSON structure. The forward middleware will ignore the memo unless there is a `forward` key, and other middleware can behave similarly for their relevant key.

For multi-hop actions, a `next` property is included in the object for the relevant root key to specify the next action. The intention is that the consumer will handle what it needs using the data from the object, then pass the `next` object for the next consumer to handle, e.g. pass it as the next packet memo. In this use case, the `forward` root key is used to specify that the forward middleware should handle this packet. Multiple forwards can be enqueued by nesting `next: { forward: {...}}` (see the example memo below).
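To illustrate the nesting, a memo for an A->B->C->D forward (attached to the A->B transfer) could look like the following; the receiver addresses and channel IDs are made up:

```json
{
  "forward": {
    "receiver": "chain-c-receiver-address",
    "port": "transfer",
    "channel": "channel-b-to-c",
    "next": {
      "forward": {
        "receiver": "chain-d-receiver-address",
        "port": "transfer",
        "channel": "channel-c-to-d"
      }
    }
  }
}
```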
Example: happy path multi-hop A->B->C->D
Example: multi-hop A->B->C->D, C->D recv_packet error, refund back to A
Example: forward A->B->C with 1 retry, max timeouts occur, refund back to A