Filter LegacyWaitForFundingConfirmed on startup #549

Merged: pm47 merged 1 commit into master on Oct 5, 2023

Conversation


@pm47 pm47 commented Oct 4, 2023

There are two reasons why a channel would be in this state:
a) The peer was shut down during channel creation. The funding tx got published, but the peer hasn't been started since.
b) The channel creation was actually aborted: not only will the funding tx never be published, but the channel was also forgotten altogether by the counterparty. In this case, the channel stays permanently in the state `Syncing(LegacyWaitForFundingConfirmed)`, because the counterparty never sends `channel_reestablish` and lightning-kmp will not send its own when the remote backup is enabled.

There are some users in the b) case; it caused them to be stuck during migration (because we were checking that all states were either synced or closed). We published a patch, but then the same users got stuck with the swap-in, because we do the same kind of check there.

I fail to see why we even stored channels in that state in the first place: either the channel was actually created, in which case it will be restored upon reconnection, or the channel creation was aborted, in which case it's better to forget about it.
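
For illustration, a minimal self-contained sketch of the idea (the names below are illustrative stand-ins, not the actual lightning-kmp types or the real diff):

```kotlin
// Illustrative stand-ins, not the actual lightning-kmp channel states.
sealed class ChannelState
object LegacyWaitForFundingConfirmed : ChannelState()
object Normal : ChannelState()

// On startup, drop channels stuck in the legacy funding state: either the
// counterparty still knows the channel and it will be restored from the backup
// in their channel_reestablish (case a), or the channel creation was aborted
// and the channel should be forgotten (case b).
fun channelsToRestore(persisted: List<ChannelState>): List<ChannelState> =
    persisted.filterNot { it is LegacyWaitForFundingConfirmed }
```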

@pm47 pm47 requested a review from t-bast October 4, 2023 16:06

@t-bast t-bast left a comment

Interesting, so if the user is in the a) case, they now won't initially restore that channel, but will receive a `channel_reestablish` with a backup that will let them restore it properly. If they're in the b) case, it won't be restored, and this is ok. Did I understand this correctly? If so, it's probably worth adding a comment in the code to explain that.
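
A hedged sketch of those two outcomes on reconnection (hypothetical names, not the actual lightning-kmp API):

```kotlin
// Hypothetical sketch: how the two cases play out on reconnection for a
// channel that was filtered out at startup. Illustrative types only.
class ChannelReestablish(val backup: ByteArray?)

fun onReconnected(reestablish: ChannelReestablish?): String = when {
    // Case a): the counterparty still knows the channel and sends
    // channel_reestablish carrying our encrypted backup: restore from it.
    reestablish?.backup != null -> "restore channel from peer backup"
    // Case b): the counterparty forgot the channel, so channel_reestablish
    // never arrives and nothing is restored, which is the desired outcome.
    else -> "nothing to restore"
}
```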

I see a few issues with that approach, but I don't know how impactful they are:

  • if there's a confirmed swap-in transaction and a channel in the a) case, the user may send `please_open_channel` instead of splicing into the `LegacyWaitForFundingConfirmed` channel. But that's ok: it should be rejected by our node (since we disallow multiple channels) and can be retried after the channel is restored
  • does this negatively impact the migration process? Will users with channels in the a) case correctly mutual close them to the swap-in address?


pm47 commented Oct 5, 2023

> Did I understand this correctly?

Yes

> if there's a confirmed swap-in transaction and a channel in the a) case, the user may send `please_open_channel` instead of splicing into the `LegacyWaitForFundingConfirmed` channel. But that's ok, it should be rejected by our node (since we disallow multiple channels) and can be retried after the channel is restored

We don't actively disallow multiple channels AFAICT (I can't see where we would do that); in this scenario we would create a new dual-funding channel and use it for all future swaps and pay-to-open. Correction: we do actually disallow them, and I agree with your assessment.

> does this negatively impact the migration process?

The migration process already ignores channels in this state (this is the patch that I was referring to in the description). That is actually how we ended up with migrated wallets that had zombie `LegacyWaitForFundingConfirmed` channels blocking the swap-ins. Previously we would just ask users to reset their data.

Note that if we were able to provide the list of known channels in our `init`, then phoenix would be able to clean up by itself, and would be able to tell with certainty when all its channels are synced.
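
Purely as illustration, a sketch of that hypothetical mechanism (`knownChannelIds` is an imagined field; nothing like it exists in `init` today):

```kotlin
// Hypothetical cleanup: if the peer's init listed the channel ids it still
// knows about, we could prune everything else. Channel ids are plain strings
// here purely for illustration.
fun channelsToForget(
    localChannelIds: Set<String>,
    knownChannelIds: Set<String>, // imagined: received in the peer's init
): Set<String> =
    // Channels we persisted but the peer no longer knows can be forgotten;
    // the rest are expected to be re-established and eventually synced.
    localChannelIds - knownChannelIds
```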

> Will users with channels in the a) case correctly mutual close them to the swap-in address?

No, those channels would still be restored on startup, and presumably (since they are real) would quickly move to the `Normal` state. We would end up in a scenario where a user has several channels in the `Normal` state. That is something that can happen in the general case, and we already had plans to offer a "consolidate" feature to deal with that.

@pm47 pm47 merged commit 9d0c10b into master Oct 5, 2023
@pm47 pm47 deleted the clean-up-legacy-wait-funding-confirmed branch October 5, 2023 12:15