
Automatically Update fees on outbound channels #985

Merged

Conversation

TheBlueMatt
Collaborator

Based on #975, this automagically sends update_fee messages on outbound channels as necessary.

@TheBlueMatt TheBlueMatt added this to the 0.0.100 milestone Jul 5, 2021
@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch 3 times, most recently from 91f8a1d to 09454c5 Compare July 5, 2021 18:14
@codecov

codecov bot commented Jul 5, 2021

Codecov Report

Merging #985 (36f19a1) into main (9d8d24f) will increase coverage by 0.00%.
The diff coverage is 92.04%.

❗ Current head 36f19a1 differs from pull request most recent head d3af49e. Consider uploading reports for the commit d3af49e to get more accurate results

@@           Coverage Diff            @@
##             main     #985    +/-   ##
========================================
  Coverage   90.94%   90.95%            
========================================
  Files          63       64     +1     
  Lines       32150    32393   +243     
========================================
+ Hits        29240    29463   +223     
- Misses       2910     2930    +20     
Impacted Files Coverage Δ
lightning/src/chain/chaininterface.rs 0.00% <0.00%> (ø)
lightning/src/ln/mod.rs 90.00% <ø> (ø)
lightning/src/ln/channelmanager.rs 86.11% <79.22%> (-0.16%) ⬇️
lightning/src/ln/channel.rs 89.37% <89.32%> (-0.14%) ⬇️
lightning/src/ln/chanmon_update_fail_tests.rs 97.86% <98.52%> (+0.05%) ⬆️
lightning-background-processor/src/lib.rs 93.93% <100.00%> (+0.06%) ⬆️
lightning/src/ln/functional_tests.rs 97.26% <100.00%> (+0.01%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 9d8d24f...d3af49e.

@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch 2 times, most recently from bf62104 to a0f981c Compare July 8, 2021 01:14
@TheBlueMatt
Collaborator Author

I'm rewriting the outbound update_fee sending state tracking.

@TheBlueMatt TheBlueMatt marked this pull request as draft July 9, 2021 17:44
@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch 4 times, most recently from a826a80 to 884925d Compare July 13, 2021 16:28
@TheBlueMatt
Collaborator Author

Modulo some missing tests this should be good to go.

@TheBlueMatt TheBlueMatt marked this pull request as ready for review July 13, 2021 16:28
@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch 2 times, most recently from 0c4bc18 to 8de0043 Compare July 13, 2021 18:01

@ariard ariard left a comment


Reviewed until 4666184

if chan.get_feerate() < new_feerate || chan.get_feerate() > new_feerate * 2 {
log_trace!(self.logger, "Channel {} qualifies for a feerate change from {} to {}. Checking if its live ({}).",
log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate, chan.is_live());
if chan.is_live() {

What do you think about logging the channels that did not get an update_fee bump, along with their currently committed feerate?

As is_live() encompasses peer disconnection, if that state persists and the pre-signed feerate starts to sink deeper relative to network mempool feerate levels, a node operator might decide to preemptively close those channels?

Collaborator Author

I'm confused; the above log line does always fire if a channel qualifies for a feerate update, even if the peer is disconnected.


Ah, you're right! And that should be a case covered by #993, so we're good here.
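The qualification predicate in the snippet above can be restated as a tiny standalone function (an illustration only, not the actual ChannelManager code; the function name is hypothetical): a channel qualifies for an update_fee when the new estimate exceeds the committed feerate, or the committed feerate is more than twice the new estimate.

```rust
/// Hypothetical restatement of the update_fee qualification check discussed
/// above. Feerates are in sat per 1000 weight units, as elsewhere in LDK.
fn qualifies_for_fee_update(committed_sat_per_kw: u32, new_sat_per_kw: u32) -> bool {
    // Bump if we are below the current estimate, or more than 2x above it.
    committed_sat_per_kw < new_sat_per_kw || committed_sat_per_kw > new_sat_per_kw * 2
}
```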

lightning/src/ln/channelmanager.rs (resolved; outdated)
lightning/src/ln/channelmanager.rs (resolved)
@@ -3843,6 +3863,33 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> MessageSend
result = NotifyOption::DoPersist;
}

if self.startup_complete.load(Ordering::Relaxed) != 0 {
let new_feerate = self.fee_estimator.get_est_sat_per_1000_weight(ConfirmationTarget::Normal);

Hmmmm, should we use ConfirmationTarget::HighPriority by default?

Are we operating under the assumptions that a) our counterparty can go offline at any time and b) the mempool can spike, making the pre-signed feerate too stale to pass mempool min feerates?

If we want to avoid feerate overhead in the unilateral-closure case, and the counterparty is online, we can still adjust via update_fee just in time before enforcing the close?

Collaborator Author

I suppose if we broadcasted our commitment transaction quickly after our counterparty went offline (which we won't, but assuming we did), then we could use Normal and it's ok. I think we should use Normal until we have anchors, then switch anchor channels to Normal (or maybe Background * 2?) and switch non-anchor channels to something higher? The issue is really that HighPriority may just be a very high fee and users would complain.


Well, this feerate should only play out in the case of a non-cooperative closure, which should be the minority of all channel closures, so intuitively I would say we can be generous with it and users shouldn't complain?
Though I concede you can still have a lot of closures for liquidity operations/channel-management tooling, in which case you would want to be conservative....

W.r.t. anchor support, I think we could extend our FeeEstimator API to ask for the mempool min feerate? And I can extend Core's estimaterawfee to return this value, pending package relay deployment? And yes, switch legacy fee-bumping to a higher default feerate.

Collaborator Author

W.r.t. anchor support, I think we could extend our FeeEstimator API to ask for the mempool min feerate?

Yea, this is a good question. I'm not sure what the right answer is, sadly most fee estimators have no concept of "mempool min fee".

log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate, chan.is_live());
if chan.is_live() {
should_persist = NotifyOption::DoPersist;
let res = match chan.send_update_fee_and_commit(new_feerate, &self.logger) {

Do we check that a proposed (or received!) update_fee is at least equal to the mempool min relay fee? The check is enforced here: https://github.com/bitcoin/bitcoin/blob/531c2b7c04898f5a2097f44e8c12bfb2f53aaf9b/src/validation.cpp#L519

I don't see it in Channel::check_remote_fee or send_update_fee. We have a worst-case check on the receiver side against ConfirmationTarget::Background, but not on the sender side. And we might also want it as a belt-and-suspenders check in case of buggy/compromised fee estimators?

Collaborator Author

Hmm, true, I think we need to apply that everywhere we call get_est_sat_per_1000_weight, though, not just here. Maybe we can open an issue and add assertions everywhere we call it later?


Tracked: #1016
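The belt-and-suspenders floor discussed above (tracked in #1016) could be sketched as a simple clamp on estimator output. The constant and function names here are assumptions for illustration, not the actual LDK API; 253 sat/kWU is roughly Core's 1 sat/vbyte min relay fee after the vbytes-to-weight round-up mentioned later in this thread.

```rust
use std::cmp;

/// Assumed floor: ~1 sat/vbyte expressed in sat per 1000 weight units.
const MIN_RELAY_FEE_SAT_PER_KW: u32 = 253;

/// Clamp a (possibly buggy) fee estimator's output to the min relay floor.
fn floored_feerate(estimated_sat_per_kw: u32) -> u32 {
    cmp::max(estimated_sat_per_kw, MIN_RELAY_FEE_SAT_PER_KW)
}
```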

lightning/src/ln/channel.rs (resolved; outdated)
@@ -301,6 +301,21 @@ pub struct CounterpartyForwardingInfo {
pub cltv_expiry_delta: u16,
}

/// When a node increases the feerate on an outbound channel, it can cause the channel to get

nit: "Updating the fee and paying for it is the responsibility of the channel initiator in the update_fee fee-bumping scheme. When it does increase the feerate on an outbound channel, ..." — good to quickly recall who bears the fee burden.

Collaborator Author

I rewrote it a chunk.

/// balance is right at or near the channel reserve, neither side will be able to send an HTLC.
/// Thus, before sending an HTLC when we are the initiator, we check that the feerate can increase
/// by this multiple without hitting this case, before sending.
/// This multiple is effectively the maximum feerate "jump" we expect until more HTLCs flow over

This is the issue solved by lightning/bolts#740, right?

IIRC, the new reserve is to "cover" the cost of new HTLCs flowing, though it can also serve as an update_fee reserve, but maybe in that case we should bump it a bit more compared to the spec recommendations?

The odds of a stuck channel only decrease when the commitment transaction size shrinks or the initiator balance grows (though not when mempools empty, as we base the upper bound on CHAN_STUCK_FEE_INCREASE_MULTIPLE * 2 in the newer ChannelManager::update_channel_fee?)

Collaborator Author

Yes, that's the relevant issue. Note that there is no way to "properly" protect ourselves because it's not enough to just bound single fee increases by some constant; we have to bound the total fee increase between now and when we next send an HTLC by the constant. I agree we should increase this beyond 2x, but it's out of scope of this PR and requires fixing a number of tests.
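The buffer check being discussed can be sketched in isolation as follows. This is illustrative only: the constant's value and the helper names are assumptions, not the real channel.rs code, and as noted above bounding a single jump does not bound the total increase before the next HTLC send.

```rust
/// Assumed multiple: the feerate "jump" we budget for before sending an HTLC.
const CHAN_STUCK_FEE_INCREASE_MULTIPLE: u64 = 2;

/// Commitment fee in sats for a feerate (sat per 1000 weight) and tx weight.
fn commit_tx_fee_sat(feerate_sat_per_kw: u64, tx_weight: u64) -> u64 {
    feerate_sat_per_kw * tx_weight / 1000
}

/// As initiator, only send an HTLC if our balance covers the commitment fee
/// even after the budgeted feerate jump, plus the channel reserve.
fn can_send_htlc(initiator_balance_sat: u64, feerate_sat_per_kw: u64,
                 tx_weight: u64, reserve_sat: u64) -> bool {
    let buffered_fee =
        commit_tx_fee_sat(feerate_sat_per_kw * CHAN_STUCK_FEE_INCREASE_MULTIPLE, tx_weight);
    initiator_balance_sat >= buffered_fee + reserve_sat
}
```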

@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch 2 times, most recently from 899d282 to fd5d84b Compare July 21, 2021 14:56
@TheBlueMatt
Collaborator Author

Added a test for "Fix handling of inbound uncommitted feerate updates".


@ariard ariard left a comment


Reviewed until fd5d84b, overall SGTM, though I would like to have another look at InboundFeeUpdateState correctness.

And good to have another pair of eyes.

@@ -676,7 +676,7 @@ impl<Signer: Sign> Channel<Signer> {
if feerate_per_kw < lower_limit {
return Err(ChannelError::Close(format!("Peer's feerate much too low. Actual: {}. Our expected lower limit: {}", feerate_per_kw, lower_limit)));
}
let upper_limit = fee_estimator.get_est_sat_per_1000_weight(ConfirmationTarget::HighPriority) as u64 * 2;
let upper_limit = fee_estimator.get_est_sat_per_1000_weight(ConfirmationTarget::HighPriority) as u64 * 4;

We could even give more leeway to our users and hardcode the worst top-of-mempool feerate observed over the last 2 or 4 years?

Collaborator Author

Hmm, I don't think we want to let it get too crazy. Correct me if I'm wrong, but I think the channel funder (who is a miner) can burn their counterparty's money if they can set the fee very high, and then cause their counterparty's balance to just be dust?

@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch from fd5d84b to 8f50978 Compare July 26, 2021 16:06
@jkczyz jkczyz self-requested a review July 26, 2021 19:11
@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch from 8f50978 to 2d51b45 Compare July 26, 2021 19:16
@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch from 2d51b45 to 691b865 Compare July 28, 2021 15:10
@TheBlueMatt
Collaborator Author

Rebased on upstream, squashing fixup commits to make the rebase cleaner.

Contributor

@valentinewallace valentinewallace left a comment

I'm ACK mod Jeff's comments that were left on #1011

@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch 2 times, most recently from 3f66c19 to 8b795af Compare August 12, 2021 17:49
Contributor

@valentinewallace valentinewallace left a comment

ACK mod the one comment + squash

lightning/src/ln/channelmanager.rs (resolved)
@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch from 8b795af to dc1e110 Compare August 12, 2021 21:24
@TheBlueMatt
Collaborator Author

Dropped the feerate update in get_and_clear_pending_msg_events and just made the API that you have to call timer_tick on startup as well.

@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch from dc1e110 to 993348b Compare August 12, 2021 21:28
lightning/src/ln/channel.rs (resolved; outdated)
@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch 3 times, most recently from 91c35d7 to 1f6e46a Compare August 13, 2021 21:52
Previously we'd been expecting to implement anchor outputs before
shipping 0.1, thus reworking our channel fee update process
entirely and leaving it as a future task. However, due to the
difficulty of working with on-chain anchor pools, we are now likely
to ship 0.1 without requiring anchor outputs.

In either case, there isn't a lot of reason to require that users
call an explicit "prevailing feerates have changed" function now
that we have a timer method which is called regularly. Further, we
really should be the ones deciding on the channel feerate in terms
of the users' FeeEstimator, instead of requiring users implement a
second fee-providing interface by calling an update_fee method.

Finally, there is no reason for an update_fee method to be
channel-specific, as we should be updating all (outbound) channel
fees at once.

Thus, we move the update_fee handling to the background, calling it
on the regular 1-minute timer. We also update the regular 1-minute
timer to fire on startup as well as every minute to ensure we get
fee updates even on mobile clients that are rarely, if ever, open
for more than one minute.
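The timer behavior described in this commit message can be sketched as follows. This is a standalone illustration, not the actual lightning-background-processor code; the function name is hypothetical and a real processor would sleep ~60 seconds between ticks.

```rust
/// Fire the per-minute tick once immediately on startup, then once per
/// interval thereafter, so short-lived mobile sessions still get fee updates.
fn run_ticks<F: FnMut()>(mut timer_tick: F, later_ticks: u32) {
    timer_tick(); // startup tick, fired before any waiting
    for _ in 0..later_ticks {
        // a real background processor would sleep ~60s here between ticks
        timer_tick();
    }
}
```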
When we send an update_fee to our counterparty on an outbound
channel, if we need to re-send a commitment update after
reconnection, the update_fee must be present in the re-sent
commitment update messages. However, we were always setting the
update_fee field in the commitment update to None, causing us to
generate invalid commitment signatures and get channel
force-closures.

This fixes the issue by correctly detecting when an update_fee
needs to be re-sent, doing so when required.
If we receive an update_fee but do not receive a commitment_signed,
we should not persist the pending fee update to disk or hold on to
it after our peer disconnects.

To make the code as readable as possible, we add a state enum
which matches the relevant states from InboundHTLCState, allowing
for more simple code comparison between inbound HTLC handling and
update_fee handling.
Inbound fee updates are rather broken in lightning, as they can
impact the non-funder despite the funder paying the fee, though only
via the dust exposure the update places on the fundee.

At least lnd is fairly aggressively high in their (non-anchor) fee
estimation, running the risk of force-closure. Further, because we
relied on a fee estimator we don't have full control over, we
were assuming our users' fees are particularly conservative, and
thus were at significant risk of force-closures.

This converts our fee limiting to use an absurd upper bound,
focusing on whether we are over-exposed to in-flight dust when we
receive an update_fee.
@TheBlueMatt TheBlueMatt force-pushed the 2021-06-auto-chan-fee-updates branch from 1f6e46a to d3af49e Compare August 13, 2021 21:54
@TheBlueMatt
Collaborator Author

Squashed fixup commits. I had missed a few of Jeff's comments from yesterday on #1011 which applied here, so included those as well. Diff from Val's ACK follows, ignoring a minor reversion of a change to full_stack_target.rs which impacts the hard-coded test and is thus rather long in bytes.

$ git diff-tree -U1 8b795af d3af49e lightning
diff --git a/lightning/src/ln/channel.rs b/lightning/src/ln/channel.rs
index dc513865c..a0071d543 100644
--- a/lightning/src/ln/channel.rs
+++ b/lightning/src/ln/channel.rs
@@ -1025,4 +1025,4 @@ impl<Signer: Sign> Channel<Signer> {
 	/// transaction, the list of HTLCs which were not ignored when building the transaction).
-	/// Note that below-dust HTLCs are included in the third return value, but not the second, and
-	/// sources are provided only for outbound HTLCs in the third return value.
+	/// Note that below-dust HTLCs are included in the fourth return value, but not the third, and
+	/// sources are provided only for outbound HTLCs in the fourth return value.
 	#[inline]
@@ -4946,9 +4946,7 @@ impl<Signer: Sign> Writeable for Channel<Signer> {
 			self.pending_update_fee.map(|(a, _)| a).write(writer)?;
-		} else {
+		} else if let Some((feerate, FeeUpdateState::AwaitingRemoteRevokeToAnnounce)) = self.pending_update_fee {
 			// As for inbound HTLCs, if the update was only announced and never committed, drop it.
-			if let Some((feerate, FeeUpdateState::AwaitingRemoteRevokeToAnnounce)) = self.pending_update_fee {
-				Some(feerate).write(writer)?;
-			} else {
-				None::<u32>.write(writer)?;
-			}
+			Some(feerate).write(writer)?;
+		} else {
+			None::<u32>.write(writer)?;
 		}
diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs
index d29e42eed..7304c698a 100644
--- a/lightning/src/ln/channelmanager.rs
+++ b/lightning/src/ln/channelmanager.rs
@@ -506,8 +506,2 @@ pub struct ChannelManager<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref,
 
-	/// If feerates have gone up, we want to send fee updates on our outbound channels at least
-	/// once after startup and then on the timer. We thus hook it up to
-	/// `get_and_clear_pending_msg_events`, but use this to detect if we've already run once (by
-	/// setting to 0 on startup and 1 after the first message events clear).
-	startup_complete: AtomicUsize,
-
 	persistence_notifier: PersistenceNotifier,
@@ -1153,4 +1147,2 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 
-			startup_complete: AtomicUsize::new(0),
-
 			keys_manager,
@@ -2574,8 +2566,13 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 		if new_feerate <= chan.get_feerate() && new_feerate * 2 > chan.get_feerate() {
-			log_trace!(self.logger, "Channel {} does not qualify for a feerate change from {} to {}", log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate);
+			log_trace!(self.logger, "Channel {} does not qualify for a feerate change from {} to {}.",
+				log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate);
 			return (true, NotifyOption::SkipPersist, Ok(()));
 		}
-		log_trace!(self.logger, "Channel {} qualifies for a feerate change from {} to {}. Checking if it is live ({}).",
-			log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate, chan.is_live());
-		if !chan.is_live() { return (true, NotifyOption::SkipPersist, Ok(())); }
+		if !chan.is_live() {
+			log_trace!(self.logger, "Channel {} does not qualify for a feerate change from {} to {} as it cannot currently be updated (probably the peer is disconnected).",
+				log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate);
+			return (true, NotifyOption::SkipPersist, Ok(()));
+		}
+		log_trace!(self.logger, "Channel {} qualifies for a feerate change from {} to {}.",
+			log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate);
 
@@ -2648,3 +2645,3 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 
-	/// Performs actions which should happen roughly once per minute.
+	/// Performs actions which should happen on startup and roughly once per minute thereafter.
 	///
@@ -4102,29 +4099,2 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> MessageSend
 
-			if self.startup_complete.load(Ordering::Relaxed) == 0 {
-				let new_feerate = self.fee_estimator.get_est_sat_per_1000_weight(ConfirmationTarget::Normal);
-
-				let mut handle_errors = Vec::new();
-				{
-					let mut channel_state_lock = self.channel_state.lock().unwrap();
-					let channel_state = &mut *channel_state_lock;
-					let pending_msg_events = &mut channel_state.pending_msg_events;
-					let short_to_id = &mut channel_state.short_to_id;
-					channel_state.by_id.retain(|chan_id, chan| {
-						let counterparty_node_id = chan.get_counterparty_node_id();
-						let (retain_channel, chan_needs_persist, err) = self.update_channel_fee(short_to_id, pending_msg_events, chan_id, chan, new_feerate);
-						if chan_needs_persist == NotifyOption::DoPersist { result = NotifyOption::DoPersist; }
-						if err.is_err() {
-							handle_errors.push((err, counterparty_node_id));
-						}
-						retain_channel
-					});
-				}
-
-				for (err, counterparty_node_id) in handle_errors.drain(..) {
-					let _ = handle_error!(self, err, counterparty_node_id);
-				}
-
-				self.startup_complete.store(1, Ordering::Release);
-			}
-
 			let mut pending_events = Vec::new();
@@ -5277,4 +5247,2 @@ impl<'a, Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref>
 
-			startup_complete: AtomicUsize::new(0),
-
 			keys_manager: args.keys_manager,

@TheBlueMatt
Collaborator Author

Will merge after CI. For the record, full diff since Arik's ACK:

$ git diff-tree -U1 993348b d3af49e9
diff --git a/lightning/src/ln/channel.rs b/lightning/src/ln/channel.rs
index dc513865c..a0071d543 100644
--- a/lightning/src/ln/channel.rs
+++ b/lightning/src/ln/channel.rs
@@ -1025,4 +1025,4 @@ impl<Signer: Sign> Channel<Signer> {
 	/// transaction, the list of HTLCs which were not ignored when building the transaction).
-	/// Note that below-dust HTLCs are included in the third return value, but not the second, and
-	/// sources are provided only for outbound HTLCs in the third return value.
+	/// Note that below-dust HTLCs are included in the fourth return value, but not the third, and
+	/// sources are provided only for outbound HTLCs in the fourth return value.
 	#[inline]
@@ -4946,9 +4946,7 @@ impl<Signer: Sign> Writeable for Channel<Signer> {
 			self.pending_update_fee.map(|(a, _)| a).write(writer)?;
-		} else {
+		} else if let Some((feerate, FeeUpdateState::AwaitingRemoteRevokeToAnnounce)) = self.pending_update_fee {
 			// As for inbound HTLCs, if the update was only announced and never committed, drop it.
-			if let Some((feerate, FeeUpdateState::AwaitingRemoteRevokeToAnnounce)) = self.pending_update_fee {
-				Some(feerate).write(writer)?;
-			} else {
-				None::<u32>.write(writer)?;
-			}
+			Some(feerate).write(writer)?;
+		} else {
+			None::<u32>.write(writer)?;
 		}
diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs
index 76e5285c1..7304c698a 100644
--- a/lightning/src/ln/channelmanager.rs
+++ b/lightning/src/ln/channelmanager.rs
@@ -2566,8 +2566,13 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 		if new_feerate <= chan.get_feerate() && new_feerate * 2 > chan.get_feerate() {
-			log_trace!(self.logger, "Channel {} does not qualify for a feerate change from {} to {}", log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate);
+			log_trace!(self.logger, "Channel {} does not qualify for a feerate change from {} to {}.",
+				log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate);
 			return (true, NotifyOption::SkipPersist, Ok(()));
 		}
-		log_trace!(self.logger, "Channel {} qualifies for a feerate change from {} to {}. Checking if it is live ({}).",
-			log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate, chan.is_live());
-		if !chan.is_live() { return (true, NotifyOption::SkipPersist, Ok(())); }
+		if !chan.is_live() {
+			log_trace!(self.logger, "Channel {} does not qualify for a feerate change from {} to {} as it cannot currently be updated (probably the peer is disconnected).",
+				log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate);
+			return (true, NotifyOption::SkipPersist, Ok(()));
+		}
+		log_trace!(self.logger, "Channel {} qualifies for a feerate change from {} to {}.",
+			log_bytes!(chan_id[..]), chan.get_feerate(), new_feerate);
 
$

@TheBlueMatt TheBlueMatt merged commit a369f9e into lightningdevkit:main Aug 13, 2021
self.pending_update_fee = None;
}
if let &mut Some((_, ref mut update_state)) = &mut self.pending_update_fee {
if *update_state == FeeUpdateState::RemoteAnnounced {

you could have a debug_assert!(!self.is_outbound())

Collaborator Author

Sure, we could; also dunno if it's worth a followup to add a debug assertion :)


Narrator voice: it was.

Collaborator Author

Ah, this is why I didn't:

error[E0502]: cannot borrow `*self` as immutable because it is also borrowed as mutable
    --> lightning/src/ln/channel.rs:2480:20
     |
2478 |         if let &mut Some((_, ref mut update_state)) = &mut self.pending_update_fee {
     |                                                       ---------------------------- mutable borrow occurs here
2479 |             if *update_state == FeeUpdateState::RemoteAnnounced {
2480 |                 debug_assert!(!self.is_outbound());
     |                                ^^^^ immutable borrow occurs here
2481 |                 *update_state = FeeUpdateState::AwaitingRemoteRevokeToAnnounce;
     |                 -------------------------------------------------------------- mutable borrow later used here

lightning/src/ln/channel.rs (resolved)
lightning/src/ln/channel.rs (resolved)
enum FeeUpdateState {
// Inbound states mirroring InboundHTLCState
RemoteAnnounced,
AwaitingRemoteRevokeToAnnounce,

nit: AwaitingRemoteRevokeToAnnounce -> AwaitingRemoteRevokeToApply; as you describe just below, there is no notion of relaying/announcing an update_fee forward, so better to remove the confusing ToAnnounce reference imo

Collaborator Author

I'd prefer to keep this matching the InboundHTLCState stuff, renaming that is a bit of a separate question and large patch.

lightning/src/ln/channel.rs (resolved)
// always accepting up to 25 sat/vByte or 10x our fee estimator's "High Priority" fee.
// We generally don't care too much if they set the feerate to something very high, but it
// could result in the channel being useless due to everything being dust.
let upper_limit = cmp::max(250 * 25,

Should this be 253? I don't have the full computation in mind, but IIRC if you commit your transactions at 250 sat per kWU they won't meet Core's min relay fee, due to the round-up in the vbytes-to-weight-units conversion.

Collaborator Author

I don't think we need to care given it's * 25 - you aren't at risk of underestimating :)
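The upper bound from the snippet above can be restated as a small function (illustrative only; the function name is hypothetical): accept an inbound feerate up to 25 sat/vbyte (expressed as 250 sat/kWU * 25) or 10x the estimator's "High Priority" feerate, whichever is greater.

```rust
use std::cmp;

/// Upper bound on an acceptable inbound update_fee feerate, in sat per 1000
/// weight units, per the constants quoted in the review thread above.
fn inbound_feerate_upper_limit(high_priority_sat_per_kw: u32) -> u32 {
    cmp::max(250 * 25, high_priority_sat_per_kw.saturating_mul(10))
}
```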

let holder_tx_dust_exposure = inbound_stats.on_holder_tx_dust_exposure_msat + outbound_stats.on_holder_tx_dust_exposure_msat;
let counterparty_tx_dust_exposure = inbound_stats.on_counterparty_tx_dust_exposure_msat + outbound_stats.on_counterparty_tx_dust_exposure_msat;
if holder_tx_dust_exposure > self.get_max_dust_htlc_exposure_msat() {
return Err(ChannelError::Close(format!("Peer sent update_fee with a feerate ({}) which may over-expose us to dust-in-flight on our own transactions (totaling {} msat)",

I wonder whether this exposes us to flood-and-loot-style attacks, where a third party to this link, betting on upcoming feerate spikes, forwards a set of trimmed-to-dust HTLCs through us to trigger a force-close. I think it could even be done at scale across the network if this default configuration is widely deployed...

Collaborator Author

Maybe, but it's better than force-closing on any feerate increase. That exposes us to the same set of issues, except instead of requiring some newly-dust HTLCs it just requires a feerate increase lol.
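The dust-exposure gate under discussion can be sketched in isolation as follows. This is a hypothetical restatement, not the actual channel.rs API: an inbound update_fee is rejected when the total in-flight dust it would imply exceeds the configured cap.

```rust
/// Returns true when an inbound update_fee should be rejected because the
/// holder's total dust exposure (inbound + outbound, in msat) at the new
/// feerate would exceed the configured maximum.
fn update_fee_over_dust_limit(
    inbound_dust_msat: u64,
    outbound_dust_msat: u64,
    max_dust_htlc_exposure_msat: u64,
) -> bool {
    let holder_tx_dust_exposure = inbound_dust_msat + outbound_dust_msat;
    holder_tx_dust_exposure > max_dust_htlc_exposure_msat
}
```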

TheBlueMatt added a commit to TheBlueMatt/rust-lightning that referenced this pull request Aug 16, 2021
The docs were left stale after the logic was updated in lightningdevkit#985 as
pointed out in post-merge review.
TheBlueMatt added a commit to TheBlueMatt/rust-lightning that referenced this pull request Aug 16, 2021
These were suggested to clarify behavior in post-merge review of lightningdevkit#985.
TheBlueMatt added a commit to TheBlueMatt/rust-lightning that referenced this pull request Sep 9, 2021
These were suggested to clarify behavior in post-merge review of lightningdevkit#985.
TheBlueMatt added a commit to TheBlueMatt/rust-lightning that referenced this pull request Sep 9, 2021
The docs were left stale after the logic was updated in lightningdevkit#985 as
pointed out in post-merge review.
TheBlueMatt added a commit to TheBlueMatt/rust-lightning that referenced this pull request Sep 9, 2021
These were suggested to clarify behavior in post-merge review of lightningdevkit#985.