ChannelSigner::sign_holder_commitment_and_htlcs causes panic when returning Err #2520
Comments
We absolutely should! Ideally all the signer methods are fallible/async in some way (#2088), but this may well be a trickier part of that overall work. We do have some retry logic for broadcasting.
Looks like we'll also panic on failures in `lightning/src/chain/onchaintx.rs` (lines 607 to 609 in afdcd1c).
So I think the idea here would be to get rid of the `unwrap`/`expect` calls, and allow the failure to propagate up the call stack. If the signer fails to sign, you can either:
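A minimal sketch of that direction, using hypothetical stand-in types (`Signer`, `Signature`, and the simplified `OnChainTxHandler` here are not LDK's real API): instead of `expect`ing the signature, the handler returns a `Result` and lets the failure propagate to the caller, which can retry later.

```rust
// Hypothetical, simplified stand-ins for the real signer and signature types.
struct Signature(u64);

struct Signer {
    available: bool,
}

impl Signer {
    // Fallible signing call, mirroring ChannelSigner methods that return Result.
    fn sign_holder_commitment(&self) -> Result<Signature, ()> {
        if self.available { Ok(Signature(42)) } else { Err(()) }
    }
}

struct OnChainTxHandler {
    signer: Signer,
}

impl OnChainTxHandler {
    // Before: self.signer.sign_holder_commitment().expect("...") -- panics on Err.
    // After: propagate the failure with `?` so the caller decides how to retry.
    fn get_fully_signed_holder_tx(&self) -> Result<Signature, ()> {
        let sig = self.signer.sign_holder_commitment()?;
        Ok(sig)
    }
}

fn main() {
    let ok = OnChainTxHandler { signer: Signer { available: true } };
    assert!(ok.get_fully_signed_holder_tx().is_ok());

    let failing = OnChainTxHandler { signer: Signer { available: false } };
    // No panic: the error propagates instead of unwrapping.
    assert!(failing.get_fully_signed_holder_tx().is_err());
    println!("ok");
}
```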
As for:

> Instead of broadcasting manually within the monitor on pre-anchor channels, we should just queue a claim to the

Here, we already queue a claim to the

One issue I'm seeing with using

It seems like we'll either need to make some changes to

Am I understanding correctly? WDYT?
Hmm, I guess we could go with the latter approach. The only issue is that, since the signer only knows about the transaction itself, we can't easily map back to the originating request without doing a linear search or adding an additional tracking map for requests that failed with something like

Another approach would be to expose a future, similarly to what was done in #1980. When the future resolves with the signature, we'd call back into LDK to do any remaining work. This could be generally applied to all signing functions, but would also result in a lot more complexity. Any thoughts @TheBlueMatt?
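A rough sketch of what the future-based idea could look like without requiring an async runtime (all names here — `SignatureFuture`, `SignatureResolver`, `pending_signature` — are hypothetical, not anything in LDK or #1980): the signer hands back a handle immediately, and the pending signature is filled in later by a resolver.

```rust
use std::sync::{Arc, Mutex};

// Hypothetical "signature future": shared state holding an optional result.
// A u64 stands in for a real signature type.
struct SignatureFuture {
    state: Arc<Mutex<Option<u64>>>,
}

struct SignatureResolver {
    state: Arc<Mutex<Option<u64>>>,
}

// Create a linked (future, resolver) pair for one signing request.
fn pending_signature() -> (SignatureFuture, SignatureResolver) {
    let state = Arc::new(Mutex::new(None));
    (SignatureFuture { state: state.clone() }, SignatureResolver { state })
}

impl SignatureResolver {
    // Called by the (possibly remote or asynchronous) signer once it has a signature.
    fn resolve(&self, sig: u64) {
        *self.state.lock().unwrap() = Some(sig);
    }
}

impl SignatureFuture {
    // LDK would check this when it wants to finish building the transaction.
    fn poll(&self) -> Option<u64> {
        *self.state.lock().unwrap()
    }
}

fn main() {
    let (fut, resolver) = pending_signature();
    assert!(fut.poll().is_none()); // signature not ready yet
    resolver.resolve(7);
    assert_eq!(fut.poll(), Some(7)); // now the remaining work can proceed
    println!("ok");
}
```

The appeal is that the request-to-response mapping is carried by the handle itself, so no linear search or side tracking map is needed; the cost is plumbing these handles through every signing call site.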
If we have a signing request that doesn't resolve and a new block is mined leading to a transaction bump, we'll just have another signing request to resolve. As long as we handle both as normal, it should be fine. |
So I'm off on a tangent in #2487 which is trying to address the same problem, except for

After some discussion there, we decided to go for an approach of soft-failing out and explicitly requiring the signer to retry. The implication of doing this is that you end up with a bunch of "retry points" for each channel. I think I can make this work for

We could obviously try extending this approach to every single method in the channel signer. That's probably not going to be super great, but if it is what we opt to do, then I think doing it with some sort of generic continuation-passing / future mechanism would be the way to go. (Maybe this overlaps with the stuff in ln/util/wakers.rs, not sure.) This was kind of the direction I was going before @devrandom and @TheBlueMatt suggested we just do retry.
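The soft-fail-and-retry approach can be sketched roughly as follows (a toy model, not LDK code — `signer_pending_commitment`, `try_send_commitment`, and `signer_unblocked` are illustrative names): on a signer failure we record a "retry point" instead of panicking, and re-attempt the operation when the signer signals it can make progress.

```rust
// Toy model of a channel with one retry point for a failed signing operation.
struct Channel {
    signer_ok: bool,
    signer_pending_commitment: bool, // the "retry point"
    sent_commitments: u32,
}

impl Channel {
    fn try_send_commitment(&mut self) {
        if self.signer_ok {
            self.sent_commitments += 1;
            self.signer_pending_commitment = false;
        } else {
            // Soft-fail: remember that this operation still needs to happen,
            // rather than unwrapping and crashing.
            self.signer_pending_commitment = true;
        }
    }

    // Called when the signer tells us it is unblocked and can sign again.
    fn signer_unblocked(&mut self) {
        self.signer_ok = true;
        if self.signer_pending_commitment {
            self.try_send_commitment();
        }
    }
}

fn main() {
    let mut chan = Channel {
        signer_ok: false,
        signer_pending_commitment: false,
        sent_commitments: 0,
    };
    chan.try_send_commitment();
    assert_eq!(chan.sent_commitments, 0); // soft-failed, nothing sent
    assert!(chan.signer_pending_commitment); // retry point recorded
    chan.signer_unblocked();
    assert_eq!(chan.sent_commitments, 1); // retried successfully
    println!("ok");
}
```

The "bunch of retry points" cost mentioned above shows up here as one pending flag per fallible signing operation; extending this to every signer method multiplies that state.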
I think those may be the harder ones to get working. Signing the counterparty commitment already has retry points (because we do it on reconnect or on monitor update completion). I think we should see what happens if we keep going down the path you're on before we fall back to trying something else. Sadly, we can't "just" use Rust futures since we support non-async environments (and Rust async functions are colored functions), not to mention such futures require pinning, and restarting them is a pain. We don't currently have any such infrastructure in
There are currently several `expect`s in `OnChainTxHandler` (e.g., `get_fully_signed_holder_tx`) that make it such that any implementation of `ChannelSigner::sign_holder_commitment_and_htlcs` that returns `Err` will cause LDK to panic.

I am wondering if it is possible to rethink how this works (e.g., caching the signatures when we advance the channel state?) so that the existing callers in `OnChainTxHandler` might be able to access the commitment and HTLC signatures without querying the signer.