
Pad ohttp req/res messages to consistent 8192 bytes #395

Merged · 2 commits into payjoin:master from uniform-ohttp-payloads on Nov 29, 2024

Conversation

@DanGould (Contributor) commented Nov 20, 2024

Padded messages (POST, GET, Response 202, Response 200) are no longer distinguishable

Rationale: 7 KiB × 2 messages/min over 1 hour ≈ 1 MB/hr, which is not too much overhead.

  • We must also specify a 30s timeout in BIP 77 for long polling, and say that clients MUST poll as soon as they receive a response, so that clients can't be fingerprinted by request timing. (TODO left in BIP comments.)
  • Allocate fixed-length arrays instead of resizing vectors (see the sketch after this list).
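
A minimal sketch of the fixed-length-array approach, not the PR's exact code; the function name and error type are illustrative, and the constant assumes the 7168-byte request size discussed later in this thread:

```rust
/// Zero-pad a serialized message into a fixed-length array so every
/// request is the same size on the wire (illustrative sketch).
const PADDED_BHTTP_REQ_BYTES: usize = 7168; // value discussed in this PR

fn pad_message(msg: &[u8]) -> Result<[u8; PADDED_BHTTP_REQ_BYTES], usize> {
    if msg.len() > PADDED_BHTTP_REQ_BYTES {
        return Err(msg.len()); // oversized: report instead of panicking
    }
    let mut buf = [0u8; PADDED_BHTTP_REQ_BYTES];
    buf[..msg.len()].copy_from_slice(msg); // message first, zeros after
    Ok(buf)
}
```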

@DanGould force-pushed the uniform-ohttp-payloads branch 2 times, most recently from dedec61 to b404a91 on November 20, 2024 02:26
Clients pad OHTTP requests and responses so that when they're sent to
an OHTTP relay, that relay can't distinguish the type of BIP 77 message,
e.g. POST, GET, Response 202, Response 200.

Co-authored-by: nothingmuch <[email protected]>
@DanGould (Contributor, Author):

If these arguments were statically sized we wouldn't need to test it. Do that

@spacebear21 (Collaborator) left a comment:

concept ACK

payjoin-directory/src/lib.rs (outdated; resolved)
payjoin/src/send/mod.rs (outdated; resolved)
@coveralls (Collaborator) commented Nov 27, 2024:

Pull Request Test Coverage Report for Build 12063584579


  • 35 of 53 (66.04%) changed or added relevant lines in 8 files are covered.
  • 2 unchanged lines in 1 file lost coverage.
  • Overall coverage decreased (-0.1%) to 61.494%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| payjoin/src/send/error.rs | 0 | 2 | 0.0% |
| payjoin-cli/src/app/v2.rs | 0 | 3 | 0.0% |
| payjoin/src/receive/v2/mod.rs | 14 | 20 | 70.0% |
| payjoin/src/receive/v2/error.rs | 0 | 7 | 0.0% |

| Files with Coverage Reduction | New Missed Lines | % |
|---|---|---|
| payjoin/src/send/mod.rs | 2 | 92.78% |

Totals:
  • Change from base Build 12037630831: -0.1%
  • Covered Lines: 2790
  • Relevant Lines: 4537

💛 - Coveralls

Comment on lines 362 to 364
let mut res_buf = [0u8; crate::ohttp::ENCAPSULATED_MESSAGE_BYTES];
res_buf[..response.len()].copy_from_slice(response);
let response = ohttp_decapsulate(self.ohttp_ctx, &res_buf)
@spacebear21 (Collaborator) commented Nov 27, 2024:

Following the same pattern as V2GetContext::process_response:

Suggested change
let mut res_buf = [0u8; crate::ohttp::ENCAPSULATED_MESSAGE_BYTES];
res_buf[..response.len()].copy_from_slice(response);
let response = ohttp_decapsulate(self.ohttp_ctx, &res_buf)
let response_array: &[u8; crate::ohttp::ENCAPSULATED_MESSAGE_BYTES] = response
    .try_into()
    .map_err(|_| InternalValidationError::UnexpectedResponseSize(response.len()))?;
let response = ohttp_decapsulate(ohttp_ctx, response_array)

Also seems better to propagate an error instead of truncating to response.len() in case the length doesn't match the expected size.

Collaborator:

this is important, because if response.len() > crate::ohttp::ENCAPSULATED_MESSAGE_BYTES then this will panic due to a slice-index-out-of-range error, crashing the directory
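
A standalone contrast of the two patterns (my sketch, not project code; `N` stands in for `crate::ohttp::ENCAPSULATED_MESSAGE_BYTES`):

```rust
const N: usize = 8192; // stand-in for crate::ohttp::ENCAPSULATED_MESSAGE_BYTES

// The pattern under review: panics if response.len() > N, because the
// slice index runs past the end of the fixed-size buffer.
fn copy_then_pad(response: &[u8]) -> [u8; N] {
    let mut buf = [0u8; N];
    buf[..response.len()].copy_from_slice(response);
    buf
}

// The suggested pattern: fails cleanly unless response.len() == N exactly.
fn checked(response: &[u8]) -> Result<&[u8; N], usize> {
    response.try_into().map_err(|_| response.len())
}
```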

@nothingmuch (Collaborator) left a comment:

i selected "Request changes" because spacebear's suggestion is required (addresses out of range panic in directory). my suggestions are mere comments.


let mut bhttp_req = [0u8; PADDED_BHTTP_REQ_BYTES];
let mut cursor = std::io::Cursor::new(&mut bhttp_req[..]);
let _ = bhttp_message.write_bhttp(bhttp::Mode::KnownLength, &mut cursor);
Collaborator:

IIRC wrapping with Cursor is not necessary as a mut slice implements Write (not sure if write_bhttp needs Seek?)

@DanGould (Contributor, Author):

Ah, we need `.as_mut_slice()`. We kept missing this method when we stayed up late trying to fix it.
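
For illustration, a Cursor-free version along the lines suggested above, assuming `write_bhttp` only needs `Write` (if it also needs `Seek`, the Cursor stays); the stand-in payload is mine:

```rust
use std::io::Write;

fn write_padded() -> std::io::Result<[u8; 7168]> {
    let mut bhttp_req = [0u8; 7168]; // PADDED_BHTTP_REQ_BYTES
    // `&mut [u8]` implements Write, so no Cursor wrapper is needed;
    // `.as_mut_slice()` is the method that kept getting missed.
    let mut writer = bhttp_req.as_mut_slice();
    // Stand-in for `bhttp_message.write_bhttp(bhttp::Mode::KnownLength, ...)`.
    writer.write_all(b"example serialized bhttp message")?;
    Ok(bhttp_req)
}
```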


@@ -23,7 +23,8 @@ pub const DEFAULT_DIR_PORT: u16 = 8080;
 pub const DEFAULT_DB_HOST: &str = "localhost:6379";
 pub const DEFAULT_TIMEOUT_SECS: u64 = 30;

 const MAX_BUFFER_SIZE: usize = 65536;
+const PADDED_BHTTP_BYTES: usize = 8192;
Collaborator:

we should probably document the total waste (8192 - ohttp overhead - 7168) somewhere, but not sure where is a good place to do that

@DanGould (Contributor, Author):

I was wondering about this. Can we match them to create less or no waste? My understanding is that we just need leeway for sufficiently long URLs in headers.

Collaborator:

when we tried to account for the bhttp overhead we got lost in details and specifically how it varies... i'm not sure what we can promise there

my framing it as "waste" was misleading, so we should come up with a way of explaining this gap in sizes more clearly.

the bhttp encoding overhead, any variability in the vhost, uri path, etc must all fit in this ~1kib budget... that seems reasonable as a value, i'm not sure if we should match these more closely although i can only think of pretty contrived reasons why you'd need that much

@DanGould (Contributor, Author) commented Nov 28, 2024:

The path in an OHTTP request is in practice quite small, since the target path details are encapsulated and the request ought to go to the OHTTP gateway to be marshaled to the specific path target encoded in the BHTTP. In our examples we just use / as the path, and the host `https://payjo.in/` is 17 bytes. That makes me think this overhead is actually a lot more than it needs to be, and 256 bytes, being an order of magnitude greater, is almost certainly enough for any legitimate use assuming BIP 77 is the primary use. Gaining 768 additional bytes of payload by budgeting only 256 bytes for this overhead would be an ~11% increase in payload size, which is a pretty big gain for working with PSBTs for payjoins.
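
For concreteness, the arithmetic behind that ~11% figure (assuming the 8192-byte envelope and 7168-byte payload discussed above):

```text
8192 - 7168 = 1024 bytes currently reserved for bhttp/ohttp overhead
1024 -  256 =  768 bytes that could move into the payload
 768 / 7168 ≈ 0.107, i.e. the ~11% payload increase
```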

Collaborator:

i thought 7168 seemed, and was deemed, sufficient for the 2-party case? has anything changed the underlying rationale for that? FWIW a similar % gain would be obtained by not encoding the psbt as base64 in v2

Take advantage of the edit to use `&[u8]` function signatures where
applicable to reduce tech debt.
@nothingmuch (Collaborator) left a comment:

This seems mergeable to me as is.

However, if there are good reasons to doubt 7168 * 6/8 = ~5KiB PSBT as being sufficiently large, or 8192 as being sufficiently small then it should probably be addressed.
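
For reference, the 6/8 factor is base64's ratio of payload bits to encoded bits (6 of every 8), so:

```text
7168 base64 bytes × 3/4 = 5376 raw PSBT bytes ≈ 5.25 KiB (the "~5KiB" above)
```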

Both seem rather arbitrary to me, but ~5KiB being sufficiently large for BIP 77 is the more important one to get right i guess? A budget of about a dozen each of inputs and outputs per party with room to spare seems like it's more than enough for the stated use cases.

The only argument I can think of that might make a difference in user experience -- and even that is very dubious -- depends on the actual MTU. The lowest value on Wikipedia for common media is 1452 bytes, and 1452 × 5 = 7260. For one packet to make a perceptible difference on average, the user would have to be suffering from pretty extreme packet loss. Assuming this argument actually makes sense (again, I think it doesn't), then I guess we should calculate and verify precise values for the bhttp overhead etc. to make sure it all fits in less than 7260 bytes. Otherwise the argument for utilizing the existing headroom in 8192 bytes more efficiently seems very marginal for the 2-party case.

@DanGould (Contributor, Author):

> This seems mergeable to me as is

You sound very confident that this will not be followed with a breaking change. Merging.

@DanGould merged commit 5812c84 into payjoin:master on Nov 29, 2024 (6 checks passed)
@DanGould deleted the uniform-ohttp-payloads branch on November 29, 2024 13:56
@nothingmuch (Collaborator):

> You sound very confident that this will not be followed with a breaking change. Merging.

Merely very confident that I can't come up with a good argument for increasing or decreasing either of the two sizes ;-)
