Address outstanding 2077 feedback #2382
Conversation
Codecov Report

@@            Coverage Diff             @@
##             main    #2382      +/-   ##
==========================================
- Coverage   90.27%   90.26%   -0.01%
==========================================
  Files         106      106
  Lines       55691    55747      +56
  Branches    55691    55747      +56
==========================================
+ Hits        50273    50321      +48
- Misses       5418     5426       +8

☔ View full report in Codecov by Sentry.
Thanks!
This is not the case -- we currently only have such disconnect logic on
Thanks for pointing that out. I believe it does make sense to keep that disconnect logic in one place. And to me it also makes sense that we disconnect peers if they're not making progress on the prefunded parts of channel establishment. Even in the case where we already have funded channels with the peer, it's probably best to still disconnect if they're not making progress establishing a new one, although I could be wrong.
Rebasing & adding disconnect logic.
Force-pushed from 52075e7 to a2bfb87.
LGTM after squash
Rebased and squashed with:
Just changed some occurrences of "prefunded" to "unfunded" in the first commit. EDIT: Ugh, missed the actual ones Jeff pointed out. Fixed now.
FWIW, regarding the commit message and change, I'm indifferent to the use of "pending". I primarily thought "prefunded" wasn't accurate.
@wpaulino pointed out that pending is a little ambiguous, which I agree with as it might refer to "pending confirmation". So "unfunded" is the best short name we're going to get :) |
We had some inconsistencies so far in referring to channels such as `OutboundV1Channel` and `InboundV1Channel` as pending and unfunded. From here on we refer to these kinds of channels only as "unfunded". This is a slight conflation with the term "unfunded" in the context of denial-of-service mitigation, where "unfunded" actually refers to non-0conf, inbound channels that have not had their funding transaction confirmed. This might warrant changing that usage to "unconfirmed inbound".
One of a series of follow-up commits to address some issues found in PR 2077, where we split channels up into different maps and structs depending on the phase of their lifecycle.
…an an hour

We introduce an `UnfundedChannelContext` which contains a counter for the current age of an unfunded channel in timer ticks. This age is incremented on every `ChannelManager::timer_tick_occurred`, and the unfunded channel is removed if it exceeds `UNFUNDED_CHANNEL_AGE_LIMIT_TICKS`. The value will not be persisted, as unfunded channels themselves are not persisted.
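For illustration, here is a minimal sketch of the age-counter idea described in this commit message. The names `UnfundedChannelContext`, `timer_tick_occurred`, and `UNFUNDED_CHANNEL_AGE_LIMIT_TICKS` come from the text above, but the field name, the limit value of 60 ticks, and the `prune_unfunded_channels` helper are simplified assumptions rather than the actual LDK code:

```rust
use std::collections::HashMap;

// Assumed limit; roughly an hour if a timer tick fires about once a minute.
const UNFUNDED_CHANNEL_AGE_LIMIT_TICKS: usize = 60;

struct UnfundedChannelContext {
    /// Number of timer ticks this unfunded channel has lived through.
    unfunded_channel_age_ticks: usize,
}

impl UnfundedChannelContext {
    /// Bumps the age and reports whether the channel should now be dropped.
    fn should_expire_unfunded_channel(&mut self) -> bool {
        self.unfunded_channel_age_ticks += 1;
        self.unfunded_channel_age_ticks >= UNFUNDED_CHANNEL_AGE_LIMIT_TICKS
    }
}

/// Hypothetical helper called from the timer tick: age every unfunded channel
/// and remove the ones that exceeded the limit. Nothing here is persisted.
fn prune_unfunded_channels(unfunded_channels: &mut HashMap<[u8; 32], UnfundedChannelContext>) {
    unfunded_channels.retain(|_channel_id, ctx| !ctx.should_expire_unfunded_channel());
}
```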
Rebased and changed the one occurrence of "pending" to "unfunded" that Jeff pointed out.
#2077 (comment) and #2077 (comment) should still be addressed, but certainly aren't release blocking or blocking this PR.
@@ -7226,37 +7295,20 @@ where
 	log_debug!(self.logger, "Generating channel_reestablish events for {}", log_pubkey!(counterparty_node_id));

 	let per_peer_state = self.per_peer_state.read().unwrap();
-	for (_cp_id, peer_state_mutex) in per_peer_state.iter() {
+	if let Some(peer_state_mutex) = per_peer_state.get(counterparty_node_id) {
🤦 Oh lol, I missed that we were changing this to no longer iterate, which is actually what introduces #2418. That's okay; I think we should stop iterating all our peers when any peer connects, and should move the rebroadcast code elsewhere anyway.
peer_state.channel_by_id.retain(|_, chan| {
	let retain = if chan.context.get_counterparty_node_id() == *counterparty_node_id {
		if !chan.context.have_received_message() {
			// If we created this (outbound) channel while we were disconnected from the
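As a self-contained sketch of the `retain`-based cleanup pattern shown in the snippet above, assuming hypothetical stand-in types: `ChannelStub`, its fields, and `drop_unacknowledged_channels` are illustrative only, not the real `ChannelManager` structures.

```rust
use std::collections::HashMap;

struct ChannelStub {
    counterparty_node_id: [u8; 33],
    received_any_message: bool,
}

impl ChannelStub {
    fn get_counterparty_node_id(&self) -> [u8; 33] { self.counterparty_node_id }
    fn have_received_message(&self) -> bool { self.received_any_message }
}

/// Drop channels with this peer on which the counterparty never sent us a
/// single message; channels with other peers, and channels that made some
/// progress, are kept.
fn drop_unacknowledged_channels(
    channel_by_id: &mut HashMap<[u8; 32], ChannelStub>,
    counterparty_node_id: &[u8; 33],
) {
    channel_by_id.retain(|_id, chan| {
        let is_this_peer = chan.get_counterparty_node_id() == *counterparty_node_id;
        !(is_this_peer && !chan.have_received_message())
    });
}
```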
I'm missing where we discussed this no longer being true - while it's not a big deal because they'll eventually time out anyway, we should still be clearing the inbound/outbound sets on peer connection, I think.
I can’t think of a situation where the inbound/outbound sets wouldn’t be empty upon reconnect. We clear them on disconnect, and they’re never persisted, so they aren’t recoverable after a crash. Or I might be missing a case.
We don't currently check if the peer is connected in `create_channel`, so we will happily create a channel, fail to send an `OpenChannel`, and let it sit around waiting for the peer to connect. At the time we didn't track whether peers were connected in `ChannelManager`, but now that we do, we should really fix that issue by failing early in `create_channel` rather than leaning on `peer_connected` to miraculously fail the channel. We should do that later, though.
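A rough sketch of the "fail early" idea suggested here, under assumed types: `PeerState`, `ApiError::ChannelUnavailable`, and this simplified `create_channel` signature are illustrative stand-ins loosely modeled on LDK, not the actual API.

```rust
use std::collections::HashMap;

#[derive(Debug)]
enum ApiError {
    ChannelUnavailable { err: String },
}

struct PeerState {
    is_connected: bool,
}

/// Refuse to create a channel when the peer is known to be disconnected,
/// instead of letting an unsent OpenChannel sit around until reconnect.
fn create_channel(
    per_peer_state: &HashMap<[u8; 33], PeerState>,
    their_network_key: &[u8; 33],
) -> Result<(), ApiError> {
    match per_peer_state.get(their_network_key) {
        Some(peer) if peer.is_connected => {
            // ...build the OutboundV1Channel and queue the OpenChannel message here...
            Ok(())
        }
        _ => Err(ApiError::ChannelUnavailable {
            err: "Cannot create a channel while the peer is disconnected".to_owned(),
        }),
    }
}
```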
Resolves #2360.