Design references #1

Open
PoneyClairDeLune opened this issue Oct 30, 2024 · 8 comments

Comments

PoneyClairDeLune commented Nov 2, 2024

Should also support Brotli compression at quality level 1, because it:

  • Is performant, at around 300 MiB/s
  • Has the best worst-case outcome, adding only 4 bytes
  • Still achieves an acceptable compression ratio of 3.88

Under HTTP mode, Content-Encoding: br can be used to adapt dynamically.
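A minimal sketch of that negotiation, using Node's built-in Brotli encoder pinned to quality 1 purely for illustration (the server setup and payload are assumptions, not part of the design):

```ts
import { createServer } from 'node:http';
import { brotliCompressSync, constants } from 'node:zlib';

createServer((req, res) => {
  const body = Buffer.from('...payload...'); // placeholder payload
  // Only send Brotli if the client advertised it in Accept-Encoding.
  const acceptsBr = String(req.headers['accept-encoding'] ?? '').includes('br');
  if (acceptsBr) {
    res.setHeader('Content-Encoding', 'br');
    // Quality 1: the fast setting discussed above.
    res.end(brotliCompressSync(body, {
      params: { [constants.BROTLI_PARAM_QUALITY]: 1 },
    }));
  } else {
    res.end(body); // identity encoding as the fallback
  }
}).listen(8080);
```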

@PoneyClairDeLune

https://github.com/httptoolkit/brotli-wasm

Potential Brotli implementation.
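If brotli-wasm were adopted, usage at quality 1 would look roughly like this (following the library's README; exact option names may differ between versions):

```ts
import brotliPromise from 'brotli-wasm'; // the WASM module loads asynchronously

const brotli = await brotliPromise;

const input = new TextEncoder().encode('message payload');
const compressed = brotli.compress(input, { quality: 1 }); // fast setting
const restored = brotli.decompress(compressed);
```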

PoneyClairDeLune commented Nov 4, 2024


Potentially, handshakes can time out after 10 to 30 seconds, while a connection can be considered dead after seeing no activity for 90 to 150 seconds. Browsers time out after 30 seconds for handshakes, and after 120 or 300 seconds of inactivity.
The maximum number of non-multiplexed connections allowed between the browser and the server should be 8, in line with browser practice (6–8).
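A tiny sketch of how these numbers could translate into code; the constant names and the `conn` shape are hypothetical:

```ts
const HANDSHAKE_TIMEOUT_MS = 30_000; // within the proposed 10–30 s window
const IDLE_TIMEOUT_MS = 120_000;     // within the proposed 90–150 s window
const MAX_CONNECTIONS = 8;           // per browser practice (6–8)

// Idle watchdog: call the returned function on every message/frame.
// If nothing arrives within IDLE_TIMEOUT_MS, the connection is closed as dead.
function watchIdle(conn: { close(): void }): () => void {
  let timer = setTimeout(() => conn.close(), IDLE_TIMEOUT_MS);
  return () => {
    clearTimeout(timer);
    timer = setTimeout(() => conn.close(), IDLE_TIMEOUT_MS);
  };
}
```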

@PoneyClairDeLune

Firefox settings (network.http):

  • tls-handshake-timeout: 30
  • response.timeout: 300
  • connection-timeout: 90
  • connection-retry-timeout: 250
  • keep-alive.timeout: 115
  • largeKeepaliveFactor: 20
  • max-connections: 900
  • max-persistent-connections-per-proxy: 32
  • max-persistent-connections-per-server: 6
  • max-urgent-start-excessive-connections-per-host: 3
  • max_response_header_size: 393216
  • network-changed.timeout: 5
  • pacing.requests.burst: 10
  • pacing.requests.hz: 80
  • pacing.requests.min-parallelism: 6
  • request.max-attempts: 10
  • request.max-start-delay: 10
  • tcp_keepalive.long_lived_idle_time: 600
  • tcp_keepalive.short_lived_idle_time: 10
  • tcp_keepalive.short_lived_time: 60
  • http2.chunk-size: 16000
  • http2.default-concurrent: 100
  • http2.ping-timeout: 8
  • http2.ping-threshold: 58
  • http2.timeout: 170

PoneyClairDeLune commented Nov 14, 2024

Some considerations on whether to increase throughput, drawn from the designs of NGINX and I2P.

  • Because NGINX (and likely other web servers, excluding CDNs) imposes limits on how long requests can be streamed, the underlying web requests should have their send and receive phases capped at a timeout below NGINX's limits. Conforming to such standardized web servers will unavoidably give all underlying stateful connections a short lifespan.
  • Since Ditzy operates on a per-message basis, as long as messages reach their destination, the state of the underlying connections does not matter to any of the sockets reconstructed over them.
    • As such, if load-balancing reverse proxies do not interfere with the final destination the messages reach, throughput could be increased by distributing messages across multiple underlying connections, or by reserving a portion of underlying connections for heavy loads so that small but latency-sensitive messages still get through quickly (e.g. live-streaming with stream chat); see the sketch after this list. Otherwise, messages for the same socket ID and client ID should only be delivered over underlying connections that guarantee the same destination.
    • Operating on a per-message basis should also make connection migration easy to implement, as there is no requirement to deliver egress and ingress packets over the same underlying connection.
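A sketch of the per-message dispatch described above; the connection shape and the same-destination flag are assumptions for illustration:

```ts
interface UnderlyingConn {
  destinationId: string;  // final destination behind any reverse proxy
  inFlightBytes: number;  // rough load indicator
  send(msg: Uint8Array): void;
}

// Pick an underlying connection for a single message. When every candidate is
// guaranteed to reach the same destination, spread load freely; otherwise pin
// to connections known to reach this socket's destination.
function pickConnection(
  conns: UnderlyingConn[],
  destinationId: string,
  sameDestinationGuaranteed: boolean,
): UnderlyingConn {
  const candidates = sameDestinationGuaranteed
    ? conns
    : conns.filter((c) => c.destinationId === destinationId);
  // Least-loaded wins. Because dispatch is per-message, replies need not
  // come back over the same connection, which is what makes migration easy.
  return candidates.reduce((a, b) => (a.inFlightBytes <= b.inFlightBytes ? a : b));
}
```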

@PoneyClairDeLune

There is another possible attack involving client IDs. Since server egress load would be distributed across all underlying connections set to the same client ID, there is a chance an attacker could receive some of the victim's S-C messages. Client ID bind messages could carry an extension enabling a simple key verification/challenge if this is a concern, and rate limits could be placed on how fast client ID binds may happen.

However, with an ID space of 1 to 268,435,455 (2^28 − 1) and some rate limits in place, a collision is unlikely to happen. A successful collision takes 13,768,938.79 (5th percentile), 28,282,497.91 (10th), 186,065,278.45 (50th), 618,095,475.96 (90th), and 804,160,754.41 (95th) attempts respectively. With a once-per-second rate limit on client ID binds, even the unluckiest (5th-percentile) collision would take 159.36 days. Unless proven otherwise, a verification/challenge scheme is largely not needed.
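Those percentiles follow from modelling each bind as an independent uniform draw against one specific victim ID, i.e. a geometric distribution with p = 1/(2^28 − 1); a quick check:

```ts
const N = 2 ** 28 - 1; // client ID space: 1 .. 268,435,455
const p = 1 / N;       // chance one bind hits the victim's ID

// Attempts k such that P(collision within k attempts) = q.
const attemptsForQuantile = (q: number): number =>
  Math.log(1 - q) / Math.log(1 - p);

for (const q of [0.05, 0.1, 0.5, 0.9, 0.95]) {
  const k = attemptsForQuantile(q);
  // At one bind per second, k attempts take k / 86_400 days.
  console.log(`${q * 100}%: ${k.toFixed(2)} attempts, ${(k / 86_400).toFixed(2)} days`);
}
// 5%: 13768938.79 attempts, 159.36 days — matching the figures above.
```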

@PoneyClairDeLune

Just a thought: apart from offering forcibly reconstructed duplex connections, Ditzy could also be used as a multiplexing scheme...
