Automatically Update fees on outbound channels #985
Conversation
Force-pushed from 91f8a1d to 09454c5.
Codecov Report
@@            Coverage Diff            @@
##              main     #985    +/-  ##
=========================================
  Coverage    90.94%   90.95%
=========================================
  Files           63       64       +1
  Lines        32150    32393     +243
=========================================
+ Hits         29240    29463     +223
- Misses        2910     2930      +20
Continue to review the full report at Codecov.
Force-pushed from bf62104 to a0f981c.
I'm rewriting the outbound update_fee sending state tracking.
Force-pushed from a826a80 to 884925d.
Modulo some missing tests, this should be good to go.
Force-pushed from 0c4bc18 to 8de0043.
Reviewed until 4666184
lightning/src/ln/channelmanager.rs (Outdated)
if chan.get_feerate() < new_feerate || chan.get_feerate() > new_feerate * 2 {
	log_trace!(self.logger, "Channel {} qualifies for a feerate change from {} to {}. Checking if its live ({}).",
		log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate, chan.is_live());
	if chan.is_live() {
What do you think about logging the channels that were not fee-bumped, along with their currently committed feerate? As is_live() encompasses peer disconnection, if this state persists and the pre-signed feerate starts to sink deeper relative to network mempools' feerate groups, a node operator might decide to preemptively close them?
I'm confused; the above log always logs if a channel qualifies for a feerate update, even if the peer is disconnected.
Ah, you're right! And that should be a case covered by #993, good here.
lightning/src/ln/channelmanager.rs (Outdated)
@@ -3843,6 +3863,33 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> MessageSend
		result = NotifyOption::DoPersist;
	}

	if self.startup_complete.load(Ordering::Relaxed) != 0 {
		let new_feerate = self.fee_estimator.get_est_sat_per_1000_weight(ConfirmationTarget::Normal);
Hmmmm, should we use ConfirmationTarget::HighPriority by default? Are we operating under the assumptions that a) our counterparty can go offline at any time and b) the mempool can spike, making the pre-signed feerate obsolete against mempools' min feerates? If we want to avoid feerate overhead in the case of a unilateral closure, and the counterparty is online, we can still send update_fee to adjust in time before enforcing the close?
I suppose if we broadcasted our commitment transaction quickly after our counterparty went offline (which we won't, but assuming we did), then we could use Normal and it's OK. I think we should use Normal until we have anchors, then switch anchor channels to Normal (or maybe Background * 2?) and switch non-anchor channels to something higher? The issue is really that HighPriority may just be a very high fee and users would complain.
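To make the trade-off concrete, here is a minimal sketch of that policy, assuming rust-lightning's FeeEstimator trait; the helper name, the anchors flag, and the exact multiple are illustrative, not code from this PR:

use lightning::chain::chaininterface::{ConfirmationTarget, FeeEstimator};

// Hypothetical helper sketching the policy above: with anchors we can CPFP at
// broadcast time, so Normal is enough; without anchors the pre-signed feerate
// is all we get, so we hedge above Normal rather than paying HighPriority.
fn commitment_feerate<F: FeeEstimator>(fee_estimator: &F, uses_anchors: bool) -> u32 {
	if uses_anchors {
		// Could also be Background * 2, as floated above.
		fee_estimator.get_est_sat_per_1000_weight(ConfirmationTarget::Normal)
	} else {
		// "Something higher" than Normal; the exact multiple is a placeholder.
		fee_estimator.get_est_sat_per_1000_weight(ConfirmationTarget::Normal).saturating_mul(2)
	}
}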
Well, this feerate should only play out in the case of a non-cooperative closure, which should be the minority of all channel closures, so intuitively I would say we can be generous with it and users shouldn't complain? Though I concede you can still have a lot of closures from liquidity operations/channel-management tools, where you would like to be conservative....

W.r.t. anchor support, I think we could extend our FeeEstimator API to ask for the mempool min feerate? And I can extend Core's estimaterawfee to return this value, pending package relay deployment? And yes, switch legacy fee-bumping to a higher default feerate.
W.r.t. anchor support, I think we could extend our FeeEstimator API to ask for the mempool min feerate?

Yea, this is a good question. I'm not sure what the right answer is; sadly, most fee estimators have no concept of a "mempool min fee".
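A sketch of the kind of FeeEstimator extension suggested above; the trait name and methods are hypothetical and do not exist in rust-lightning's API:

use lightning::chain::chaininterface::{ConfirmationTarget, FeeEstimator};

// Hypothetical extension trait; nothing here is actual rust-lightning API.
trait MempoolAwareFeeEstimator: FeeEstimator {
	// Current mempool minimum feerate in sat per 1000 weight, e.g. backed by
	// Bitcoin Core's getmempoolinfo "mempoolminfee" field.
	fn get_mempool_min_sat_per_1000_weight(&self) -> u32;

	// Never quote a feerate that mempools would currently reject.
	fn get_floored_sat_per_1000_weight(&self, target: ConfirmationTarget) -> u32 {
		self.get_est_sat_per_1000_weight(target)
			.max(self.get_mempool_min_sat_per_1000_weight())
	}
}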
lightning/src/ln/channelmanager.rs (Outdated)
		log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate, chan.is_live());
	if chan.is_live() {
		should_persist = NotifyOption::DoPersist;
		let res = match chan.send_update_fee_and_commit(new_feerate, &self.logger) {
Do we check that the proposed (or received!) update_fee is at least equal to the mempool min relay fee? The check enforced here: https://github.com/bitcoin/bitcoin/blob/531c2b7c04898f5a2097f44e8c12bfb2f53aaf9b/src/validation.cpp#L519

I don't see it in Channel::check_remote_fee or send_update_fee. We have a worst-case check on the receiver side against ConfirmationTarget::Background, but not on the sender side. And we might also want it as a belt-and-suspenders check in case of buggy/compromised fee estimators?
Hmm, true, I think we need to apply that everywhere we call get_est_sat_per_1000_weight, though, not just here. Maybe we can open an issue and add assertions everywhere we call it later?
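Such a belt-and-suspenders assertion might look like the following sketch; the 253 sat-per-1000-weight floor is the conventional expression of Core's 1 sat/vB min relay fee, and the wrapper name is hypothetical:

use lightning::chain::chaininterface::{ConfirmationTarget, FeeEstimator};

// 1 sat/vB expressed per 1000 weight units, rounded up so that weight-to-vbyte
// rounding can't leave a transaction below Core's min relay fee.
const MIN_RELAY_FEE_SAT_PER_1000_WEIGHT: u32 = 253;

// Hypothetical wrapper for every get_est_sat_per_1000_weight call site.
fn est_with_floor<F: FeeEstimator>(fee_estimator: &F, target: ConfirmationTarget) -> u32 {
	let est = fee_estimator.get_est_sat_per_1000_weight(target);
	debug_assert!(est >= MIN_RELAY_FEE_SAT_PER_1000_WEIGHT, "fee estimator returned a sub-min-relay feerate");
	est.max(MIN_RELAY_FEE_SAT_PER_1000_WEIGHT)
}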
Tracked: #1016
lightning/src/ln/channel.rs (Outdated)
@@ -301,6 +301,21 @@ pub struct CounterpartyForwardingInfo {
	pub cltv_expiry_delta: u16,
}

/// When a node increases the feerate on an outbound channel, it can cause the channel to get
nit: "Updating fee and paying for them is the responsibility is on the channel initiator in the update_fee
fee-bumping schemes. When it does increase the feerate on an outbound channel", good to quickly recall on who is the fee-burden
I rewrote it a chunk.
/// balance is right at or near the channel reserve, neither side will be able to send an HTLC.
/// Thus, before sending an HTLC when we are the initiator, we check that the feerate can increase
/// by this multiple without hitting this case, before sending.
/// This multiple is effectively the maximum feerate "jump" we expect until more HTLCs flow over
This is the issue solved by lightning/bolts#740, right? IIRC, the new reserve is to "cover" the cost of new HTLCs flowing, though it can also serve as an update_fee reserve; maybe in that case we should bump it a bit more compared to the spec recommendations? The odds of the channel getting stuck only decrease as the commitment transaction size decreases or the initiator balance increases (though not when mempools empty, as we base the upper bound on CHAN_STUCK_FEE_INCREASE_MULTIPLE * 2 in the newer ChannelManager::update_channel_fee)?
Yes, that's the relevant issue. Note that there is no way to "properly" protect ourselves, because it's not enough to just bound single fee increases by some constant; we have to bound the total fee increase between now and when we next send an HTLC by the constant. I agree we should increase this beyond 2x, but it's out of scope for this PR and requires fixing a number of tests.
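As a rough sketch of the check the doc comment describes (the function shape and names are hypothetical; only CHAN_STUCK_FEE_INCREASE_MULTIPLE comes from the discussion above):

// Multiple by which the feerate must be able to grow before the channel
// would get stuck (mirrors CHAN_STUCK_FEE_INCREASE_MULTIPLE above).
const CHAN_STUCK_FEE_INCREASE_MULTIPLE: u64 = 2;

// Hypothetical pre-send check on the initiator side: after adding the HTLC,
// we must still afford the commitment fee at the multiple of the current
// feerate without dipping below the channel reserve.
fn initiator_can_afford_htlc(
	initiator_balance_msat: u64,
	htlc_amount_msat: u64,
	commit_tx_weight: u64,
	feerate_per_kw: u64,
	channel_reserve_msat: u64,
) -> bool {
	// fee_sat = weight * rate_per_kw / 1000, hence fee_msat = weight * rate_per_kw.
	let worst_case_fee_msat =
		commit_tx_weight * feerate_per_kw * CHAN_STUCK_FEE_INCREASE_MULTIPLE;
	initiator_balance_msat >= htlc_amount_msat + worst_case_fee_msat + channel_reserve_msat
}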
Force-pushed from 899d282 to fd5d84b.
Added a test for
Reviewed until fd5d84b; overall SGTM, though I would like to have another look at InboundFeeUpdateState correctness. And it would be good to have another pair of eyes.
lightning/src/ln/channel.rs (Outdated)
@@ -676,7 +676,7 @@ impl<Signer: Sign> Channel<Signer> {
	if feerate_per_kw < lower_limit {
		return Err(ChannelError::Close(format!("Peer's feerate much too low. Actual: {}. Our expected lower limit: {}", feerate_per_kw, lower_limit)));
	}
-	let upper_limit = fee_estimator.get_est_sat_per_1000_weight(ConfirmationTarget::HighPriority) as u64 * 2;
+	let upper_limit = fee_estimator.get_est_sat_per_1000_weight(ConfirmationTarget::HighPriority) as u64 * 4;
We could even give more leeway to our users and hardcode the worst top-of-mempool feerate observed over the last 2 or 4 years?
Hmm, I don't think we want to let it get too crazy. Correct me if I'm wrong, but I think a channel funder who is a miner can burn their counterparty's money if they can set the fee very high, causing their counterparty's balance to just become dust?
Force-pushed from fd5d84b to 8f50978.
Force-pushed from 8f50978 to 2d51b45.
Force-pushed from 2d51b45 to 691b865.
Rebased on upstream, squashing fixup commits to make the rebase cleaner.
I'm ACK mod Jeff's comments that were left on #1011
Force-pushed from 3f66c19 to 8b795af.
ACK mod the one comment + squash
Force-pushed from 8b795af to dc1e110.
Dropped the feerate update in
Force-pushed from dc1e110 to 993348b.
Force-pushed from 91c35d7 to 1f6e46a.
Previously we'd been expecting to implement anchor outputs before shipping 0.1, thus reworking our channel fee update process entirely and leaving it as a future task. However, due to the difficulty of working with on-chain anchor pools, we are now likely to ship 0.1 without requiring anchor outputs.

In either case, there isn't a lot of reason to require that users call an explicit "prevailing feerates have changed" function now that we have a timer method which is called regularly. Further, we really should be the ones deciding on the channel feerate in terms of the users' FeeEstimator, instead of requiring users implement a second fee-providing interface by calling an update_fee method. Finally, there is no reason for an update_fee method to be channel-specific, as we should be updating all (outbound) channel fees at once.

Thus, we move the update_fee handling to the background, calling it on the regular 1-minute timer. We also update the regular 1-minute timer to fire on startup as well as every minute to ensure we get fee updates even on mobile clients that are rarely, if ever, open for more than one minute.
When we send an update_fee to our counterparty on an outbound channel, if we need to re-send a commitment update after reconnection, the update_fee must be present in the re-sent commitment update messages. However, we were always setting the update_fee field in the commitment update to None, causing us to generate invalid commitment signatures and get channel force-closures. This fixes the issue by correctly detecting when an update_fee needs to be re-sent, doing so when required.
If we receive an update_fee but do not receive a commitment_signed, we should not persist the pending fee update to disk or hold on to it after our peer disconnects. In order to keep the code as readable as possible, we add a state enum which matches the relevant states from InboundHTLCState, allowing for simpler comparison between inbound HTLC handling and update_fee handling.
Inbound fee updates are rather broken in lightning, as they can impact the non-funder despite the funder paying the fee, but only in the dust exposure they place on the fundee. At least lnd is fairly aggressively high in its (non-anchor) fee estimation, running the risk of force-closure. Further, because we relied on a fee estimator we don't have full control over, we were assuming our users' fees are particularly conservative, and thus were at a lot of risk of force-closures. This converts our fee limiting to use an absurd upper bound, focusing instead on whether we are over-exposed to in-flight dust when we receive an update_fee.
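Driving the timer described in the first commit message above looks roughly like this sketch; timer_tick_occurred is the ChannelManager timer method name used in later rust-lightning releases, and the closure stands in for that call, whose full generic signature is elided:

use std::time::Duration;

// Sketch of the background loop a user runs to drive the 1-minute timer that
// the PR hooks fee updates into.
fn background_timer_loop<F: Fn()>(timer_tick: F) {
	// Fire once at startup as well as every minute, so rarely-open mobile
	// clients still get fee updates.
	timer_tick();
	loop {
		std::thread::sleep(Duration::from_secs(60));
		timer_tick();
	}
}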
Force-pushed from 1f6e46a to d3af49e.
Squashed fixup commits. I had missed a few of Jeff's comments from yesterday on #1011 which applied here, so I included those as well. The diff from Val's ACK follows, ignoring a minor reversion of a change to full_stack_target.rs which impacts the hard-coded test and is thus rather long in bytes.
Will merge after CI. For the record, full diff since Arik's ACK:
		self.pending_update_fee = None;
	}
	if let &mut Some((_, ref mut update_state)) = &mut self.pending_update_fee {
		if *update_state == FeeUpdateState::RemoteAnnounced {
you could have a debug_assert!(!self.is_outbound())
Sure, could, also dunno if it's worth a followup to add a debug assertion :)
Narrator voice: it was.
Ah, this is why I didn't:
error[E0502]: cannot borrow `*self` as immutable because it is also borrowed as mutable
--> lightning/src/ln/channel.rs:2480:20
|
2478 | if let &mut Some((_, ref mut update_state)) = &mut self.pending_update_fee {
| ---------------------------- mutable borrow occurs here
2479 | if *update_state == FeeUpdateState::RemoteAnnounced {
2480 | debug_assert!(!self.is_outbound());
| ^^^^ immutable borrow occurs here
2481 | *update_state = FeeUpdateState::AwaitingRemoteRevokeToAnnounce;
| -------------------------------------------------------------- mutable borrow later used here
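One way to sidestep that E0502, sketched against a minimal mock of the Channel shape (only the field and method names follow the quoted code; the rest is illustrative, not the committed fix):

#[derive(PartialEq)]
enum FeeUpdateState { RemoteAnnounced, AwaitingRemoteRevokeToAnnounce }

struct Chan {
	pending_update_fee: Option<(u32, FeeUpdateState)>,
	outbound: bool,
}

impl Chan {
	fn is_outbound(&self) -> bool { self.outbound }

	fn on_remote_revoke(&mut self) {
		// Snapshot the immutable fact *before* taking the mutable borrow,
		// sidestepping the E0502 above.
		let is_outbound = self.is_outbound();
		if let Some((_, ref mut update_state)) = self.pending_update_fee {
			if *update_state == FeeUpdateState::RemoteAnnounced {
				debug_assert!(!is_outbound);
				*update_state = FeeUpdateState::AwaitingRemoteRevokeToAnnounce;
			}
		}
	}
}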
enum FeeUpdateState {
	// Inbound states mirroring InboundHTLCState
	RemoteAnnounced,
	AwaitingRemoteRevokeToAnnounce,
nit: AwaitingRemoteRevokeToAnnounce -> AwaitingRemoteRevokeToApply; as you describe just below, there is no notion of relaying or announcing an update_fee onward, so better to remove the confusing ToAnnounce reference, imo.
I'd prefer to keep this matching the InboundHTLCState stuff; renaming that is a bit of a separate question and a large patch.
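For reference, a sketch of how the full state set lines up with InboundHTLCState; the per-variant comments are my reading of the thread, and the Outbound variant is assumed rather than quoted above:

enum FeeUpdateState {
	// Mirrors InboundHTLCState::RemoteAnnounced: the peer sent update_fee
	// plus commitment_signed and we have not yet revoked.
	RemoteAnnounced,
	// Mirrors InboundHTLCState::AwaitingRemoteRevokeToAnnounce, except a fee
	// update is never announced or relayed forward, so once the peer revokes
	// it simply becomes committed.
	AwaitingRemoteRevokeToAnnounce,
	// Sender side: we sent update_fee and track it only until committed.
	Outbound,
}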
	// always accepting up to 25 sat/vByte or 10x our fee estimator's "High Priority" fee.
	// We generally don't care too much if they set the feerate to something very high, but it
	// could result in the channel being useless due to everything being dust.
	let upper_limit = cmp::max(250 * 25,
Should this be 253? I don't have the full computation in mind, but IIRC if you commit your transactions at 250 sat per kWU they won't meet Core's min relay fee, due to the round-up in the weight-units-to-vbytes conversion.
I don't think we need to care given the * 25 multiplier - you aren't at risk of underestimating :)
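The rounding in question, as a small worked example (assuming Core's 1 sat/vB min relay fee, ceil for vsize, and truncating integer fee arithmetic):

// Worked example: 250 sat/kWU can fall below Core's 1 sat/vB min relay fee
// because vsize rounds weight up while fee arithmetic truncates down.
fn main() {
	let weight: u64 = 1001;            // a weight not divisible by 4
	let vsize = (weight + 3) / 4;      // ceil(1001 / 4) = 251 vbytes

	let fee_at_250 = weight * 250 / 1000; // 250 sat (truncated)
	let fee_at_253 = weight * 253 / 1000; // 253 sat

	assert!(fee_at_250 < vsize);  // 250 < 251: below 1 sat/vB, rejected
	assert!(fee_at_253 >= vsize); // 253 >= 251: relays fine
}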
let holder_tx_dust_exposure = inbound_stats.on_holder_tx_dust_exposure_msat + outbound_stats.on_holder_tx_dust_exposure_msat;
let counterparty_tx_dust_exposure = inbound_stats.on_counterparty_tx_dust_exposure_msat + outbound_stats.on_counterparty_tx_dust_exposure_msat;
if holder_tx_dust_exposure > self.get_max_dust_htlc_exposure_msat() {
	return Err(ChannelError::Close(format!("Peer sent update_fee with a feerate ({}) which may over-expose us to dust-in-flight on our own transactions (totaling {} msat)",
I wonder whether this exposes us to flood-and-loot-style attacks, where a third party to this link, betting on upcoming feerate spikes, forwards through us a set of trimmed-to-dust HTLCs to trigger a force-close. I think it could even be done at scale across the network if this default configuration is widely deployed...
Maybe, but it's better than force-closing on any feerate increase. That exposes us to the same set of issues, except instead of requiring some newly-dust HTLCs it just requires a feerate increase lol.
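For clarity, a minimal sketch of the dust-exposure gate quoted above; the struct shape and function are mocked up, with only the field names and the error string taken from the diff:

struct HtlcStats { on_holder_tx_dust_exposure_msat: u64 }

fn check_update_fee_dust(
	inbound_stats: &HtlcStats,
	outbound_stats: &HtlcStats,
	max_dust_htlc_exposure_msat: u64,
	new_feerate: u32,
) -> Result<(), String> {
	let holder_tx_dust_exposure = inbound_stats.on_holder_tx_dust_exposure_msat
		+ outbound_stats.on_holder_tx_dust_exposure_msat;
	// Force-close only on real over-exposure, not on any feerate increase.
	if holder_tx_dust_exposure > max_dust_htlc_exposure_msat {
		return Err(format!(
			"Peer sent update_fee with a feerate ({}) which may over-expose us to \
			 dust-in-flight on our own transactions (totaling {} msat)",
			new_feerate, holder_tx_dust_exposure));
	}
	Ok(())
}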
The docs were left stale after the logic was updated in lightningdevkit#985 as pointed out in post-merge review.
These were suggested to clarify behavior in post-merge review of lightningdevkit#985.
Based on #975, this automagically sends update_fee messages on outbound channels as necessary.