
Add domain unix socket supports #594

Merged 11 commits from `add_domain_unix_socket_supports` into `main` on Jul 19, 2024
Conversation

@XxChang commented Jul 17, 2024

A Unix domain socket is more efficient than a TCP socket on a local Linux machine.

This PR adds support for Unix domain sockets.

@XxChang requested a review from @phil-opp, Jul 17, 2024 15:18
@phil-opp left a comment


Thanks a lot, looks very good overall!

Review comments (all resolved) on:

- libraries/core/src/config.rs
- libraries/core/src/daemon_messages.rs
- binaries/daemon/src/node_communication/mod.rs
- .github/workflows/ci.yml
- binaries/daemon/src/node_communication/unix_domain.rs
- apis/rust/node/src/daemon_connection/unix_domain.rs (two threads)
@XxChang requested a review from @phil-opp, Jul 19, 2024 13:42
@phil-opp left a comment


Thanks for the quick update!

@phil-opp merged commit bccb1ae into main, Jul 19, 2024
18 checks passed
@phil-opp deleted the add_domain_unix_socket_supports branch, Jul 19, 2024 15:22
@haixuanTao

In my local test, I didn't see any performance difference when running dora-benchmark.

I wonder whether this makes a difference on very specialized hardware.

@XxChang

@XxChang commented Jul 21, 2024

> In my local test, I didn't see any performance difference when running dora-benchmark.
>
> I wonder whether this makes a difference on very specialized hardware.

I also ran a test and got the same result. It's weird; in theory, a Unix domain socket should be more efficient than a TCP socket. More details.

The following is `UnixStream` with `set_nonblocking(true)`:

```
Latency:
size 0x0     : 537.864µs
size 0x8     : 674.116µs
size 0x40    : 397.551µs
size 0x200   : 566.487µs
size 0x800   : 1.306532ms
size 0x1000  : 462.612µs
size 0x4000  : 461.783µs
size 0xa000  : 398.878µs
size 0x64000 : 148.14771ms
size 0x3e8000: 6.20969162s
Throughput:
size 0x0     : 16925 messages per second
size 0x8     : 15287 messages per second
size 0x40    : 14140 messages per second
size 0x200   : 5721 messages per second
size 0x800   : 1886 messages per second
size 0x1000  : 3967 messages per second
size 0x4000  : 1166 messages per second
size 0xa000  : 451 messages per second
size 0x64000 : 59 messages per second
Input `latency` was closed
Input `throughput` was closed
size 0x3e8000: 7 messages per second
```

The following is plain `UnixStream`:

```
Latency:
size 0x0     : 335.269µs
size 0x8     : 341.425µs
size 0x40    : 339.099µs
size 0x200   : 541.176µs
size 0x800   : 1.13206ms
size 0x1000  : 373.098µs
size 0x4000  : 354.6µs
size 0xa000  : 381.095µs
size 0x64000 : 236.397237ms
size 0x3e8000: 6.359789191s
Throughput:
size 0x0     : 754 messages per second
size 0x8     : 19929 messages per second
size 0x40    : 16591 messages per second
size 0x200   : 5783 messages per second
size 0x800   : 2154 messages per second
size 0x1000  : 4186 messages per second
size 0x4000  : 960 messages per second
size 0xa000  : 473 messages per second
size 0x64000 : 55 messages per second
Input `latency` was closed
Input `throughput` was closed
size 0x3e8000: 6 messages per second
```

The following is `TcpStream`:

```
Latency:
size 0x0     : 425.477µs
size 0x8     : 454.158µs
size 0x40    : 485.851µs
size 0x200   : 625.459µs
size 0x800   : 1.45471ms
size 0x1000  : 504.22µs
size 0x4000  : 505.09µs
size 0xa000  : 549.074µs
size 0x64000 : 214.217185ms
size 0x3e8000: 6.407898767s
Throughput:
size 0x0     : 20018 messages per second
size 0x8     : 18672 messages per second
size 0x40    : 18060 messages per second
size 0x200   : 6620 messages per second
size 0x800   : 2312 messages per second
size 0x1000  : 3859 messages per second
size 0x4000  : 1342 messages per second
size 0xa000  : 501 messages per second
size 0x64000 : 53 messages per second
Input `latency` was closed
Input `throughput` was closed
size 0x3e8000: 6 messages per second
```

@phil-opp

The overhead of TCP headers etc. is probably small enough not to matter much in our case. One of the comments on the Stack Overflow post you linked reported 6µs latency for TCP and 2µs for Unix domain sockets. Our latencies are at least 400µs, so a 4µs difference doesn't matter much.
