
fix(dht/encryption): greatly reduce heap allocations for encrypted messaging #4753

Conversation

@sdbondi (Member) commented Sep 29, 2022

Description

  • Encryption, decryption and message padding now mutate a single buffer for encrypted messages

Motivation and Context

Encrypted message handling should be as efficient as possible. The previous implementation allocated the full padded message size twice for encryption and twice for decryption, increasing memory usage and negating the performance benefits of using an encryption keystream.

This PR allocates a single buffer for the message to be encrypted or decrypted and performs the cipher operation in place, using the BytesMut type from the bytes crate.
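
For illustration, a minimal sketch of the in-place approach, assuming a ChaCha20-style stream cipher via the chacha20 crate and a hypothetical PADDED_BLOCK_SIZE constant; the PR's actual padding scheme and cipher setup may differ:

use bytes::BytesMut;
use chacha20::cipher::{KeyIvInit, StreamCipher};
use chacha20::ChaCha20;

// Hypothetical padding target; the real DHT padding scheme may differ.
const PADDED_BLOCK_SIZE: usize = 6000;

// Encrypts (or decrypts, since XOR with the keystream is symmetric)
// the buffer in place: one buffer, no intermediate copies.
fn encrypt_in_place(key: [u8; 32], nonce: [u8; 12], buf: &mut BytesMut) {
    // Pad in place by extending the existing buffer instead of copying
    // the message into a freshly allocated Vec of the padded size.
    let padded_len = ((buf.len() + PADDED_BLOCK_SIZE - 1) / PADDED_BLOCK_SIZE) * PADDED_BLOCK_SIZE;
    buf.resize(padded_len, 0);

    // Apply the keystream directly over the buffer contents.
    let mut cipher = ChaCha20::new(&key.into(), &nonce.into());
    cipher.apply_keystream(buf);
}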

How Has This Been Tested?

This change is backwards compatible, tested on current esme network and updated existing tests as required.
Discovery: OK
Memorynet: OK
PingPong: OK
InteractiveTransactions: OK
SafTransactions: OK

@sdbondi sdbondi added the P-controversial Process - This PR or Issue is controversial and/or requires more attention than simpler issues label Sep 29, 2022
@sdbondi sdbondi force-pushed the dht-message-padding-encrypt-optimise branch from 9e23de3 to 94be522 Compare September 29, 2022 07:51
@sdbondi sdbondi marked this pull request as draft September 29, 2022 08:28
@sdbondi sdbondi force-pushed the dht-message-padding-encrypt-optimise branch from 94be522 to c7a2c2b Compare September 29, 2022 08:54
@sdbondi sdbondi marked this pull request as ready for review September 29, 2022 08:54
@CjS77 (Collaborator) previously approved these changes Sep 29, 2022 and left a comment:


utACK

Looks great. One honest question, and one nit/suggestion.

     node_identity: &NodeIdentity,
-    body: Vec<u8>,
+    body: &T,
@CjS77 (Collaborator):

You might be able to save a lot of &[...].to_vec() calls with this type of thing:

use std::fmt::Debug;

// Generic over anything that can be viewed as an R (e.g. a slice),
// so callers don't need to allocate with .to_vec() first.
fn foo<R: Debug + ?Sized, T: AsRef<R>>(t: T) {
    println!("{:?}", t.as_ref());
}

fn main() {
    let v = vec![1];
    foo::<Vec<i32>, _>(&v);
    foo::<[i32], _>(&[1, 2, 3]);
}

@sdbondi (Member, Author) replied Sep 30, 2022:

We want this to work for any impl prost::Message. Neither [u8] nor &[u8] implements this, but Vec<u8> does, which is why, for tests only, we get the wrong-looking &[..].to_vec(). Some tests need to pass in a DhtEnvelope type, which is the case where &T where T: prost::Message means the test doesn't have to serialize first (and we can use the non-test function prepare_message instead of reimplementing it for tests).
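
For context, a hedged sketch of what such a helper signature buys (prepare_message here is illustrative, not the PR's exact implementation), relying on prost's blanket Message impl for Vec<u8>:

use bytes::BytesMut;
use prost::Message;

// Accepting any &T where T: prost::Message lets tests pass a
// DhtEnvelope directly, or a Vec<u8> (which also implements
// prost::Message) for raw-byte tests, without serializing first.
fn prepare_message<T: Message>(body: &T) -> BytesMut {
    let mut buf = BytesMut::with_capacity(body.encoded_len());
    body.encode(&mut buf).expect("buffer has sufficient capacity");
    buf
}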

@@ -213,12 +213,12 @@ mod test {
     #[test]
     fn deterministic_hash() {
         const TEST_MSG: &[u8] = b"test123";
-        const EXPECTED_HASH: &str = "d6333668f259f677703fbe4e89152ee41c7c01f6dec502befc63120246523ffe";
+        const EXPECTED_HASH: &str = "1c2bb1bcff443af4441b789bd1d6984bb8d7bed2c9f85e8cf4f45615fdd9e47d";
@CjS77 (Collaborator):

😕 Why does the hash change here? It looks like this should be a drop-in replacement?

@sdbondi (Member, Author) replied:

Yeah, looks like a breaking change :) This has more to do with changing the test helper to take a prost::Message instead of raw bytes. Now the Vec is serialized into protobuf before being hashed, which matches the behaviour of the real code.
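
To illustrate the point (a hypothetical snippet, assuming prost's Message impl for Vec<u8>): encoding wraps the bytes as a length-delimited protobuf field, so the hashed input, and hence the expected hash, changes:

use prost::Message;

fn main() {
    let raw: &[u8] = b"test123";
    // Encoding a Vec<u8> as a top-level prost message wraps it in a
    // length-delimited field (tag + length + bytes), so any hash taken
    // over `encoded` necessarily differs from a hash over `raw`.
    let encoded = raw.to_vec().encode_to_vec();
    assert_ne!(encoded.as_slice(), raw);
}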

@sdbondi sdbondi force-pushed the dht-message-padding-encrypt-optimise branch from 7617351 to d8aca37 Compare September 30, 2022 05:30
@jorgeantonio21 (Contributor) commented:

Really great! I was unaware of this issue, and seeing how to fix it with BytesMut was very insightful. Thanks @sdbondi :)

@stringhandler stringhandler merged commit 195df85 into tari-project:development Oct 3, 2022
@sdbondi sdbondi deleted the dht-message-padding-encrypt-optimise branch October 3, 2022 12:01
sdbondi added a commit to sdbondi/tari that referenced this pull request Oct 3, 2022
* development:
  v0.38.5
  feat: different default grpc ports for different networks (tari-project#4755)
  fix(core): broken doctests (tari-project#4763)
  ci: fix coverage job
  ci: run coverage on prs (tari-project#4738)
  fix(comms): fixes edge case where online status event does not get published (tari-project#4756)
  fix(dht/encryption): greatly reduce heap allocations for encrypted messaging (tari-project#4753)
  docs: explain the emission curve parameters (tari-project#4750)
  fix(comms/peer_manager): add migration to remove onionv2 addresses (tari-project#4748)
  fix(ci): add cargo cache, reduce Ubuntu dependencies and action on pull_request (tari-project#4757)
  feat(tariscript): adds ToRistrettoPoint op-code (tari-project#4749)
  fix: cli wallet cucumber (tari-project#4739)
  fix(clients): fix tari nodejs client proto paths (tari-project#4743)
  chore: disallow onion v2 (tari-project#4745)
  feat: change priority in mempool to take into account age (tari-project#4737)
  feat: trigger mempool sync on lag (tari-project#4730)
  fix(core): use compact inputs for block propagation (tari-project#4714)
  ci: deny dbg macro (tari-project#4740)