Fix epoll TCP implementation #3940
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main    #3940      +/-   ##
==========================================
- Coverage   86.63%   86.50%   -0.14%
==========================================
  Files          56       56
  Lines       16901    16901
==========================================
- Hits        14643    14621      -22
- Misses       2258     2280      +22
I assume this means our baremetal tests weren't ever hitting these partial sends/recvs (by chance)?
I believe they do hit the partial sends/recvs most of the time. If not, it would mean we aren't posting data faster than the kernel drains the socket buffers. That's also why the continue was added in the original PR in the first place: without it, the sends wouldn't fully complete. The baremetal loopback tests without encryption also show very slow numbers, much slower than what I got on my WSL VM. I remember hitting 90 Gbps with netput on my Z240 WSL VM with 1 connection, while on the host I only got 30-50 Gbps, and inconsistently. There's definitely something not quite right.
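For context, handling a partial send on a non-blocking socket under epoll generally looks like the sketch below: keep calling send() and advancing the buffer until everything is written or the kernel buffer fills (EAGAIN), then wait for the next EPOLLOUT. This is a minimal, generic illustration, not the code from this PR; the SendState and TryFlushSend names are made up.

```c
// Minimal sketch of draining a pending send buffer on a non-blocking TCP
// socket. If send() accepts only part of the buffer, keep looping until the
// kernel buffer is full (EAGAIN), then wait for EPOLLOUT to resume.
// All names here (SendState, TryFlushSend) are hypothetical.
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

typedef struct SendState {
    const char* Buffer;   // Data still waiting to be sent
    size_t Length;        // Bytes remaining
} SendState;

// Returns true when the buffer is fully sent, false if we must wait
// for another EPOLLOUT notification (or an error occurred).
static bool TryFlushSend(int Fd, SendState* State) {
    while (State->Length > 0) {
        ssize_t Sent = send(Fd, State->Buffer, State->Length, MSG_NOSIGNAL);
        if (Sent > 0) {
            // Partial send: advance past what was accepted and keep going
            // instead of assuming the whole buffer went out.
            State->Buffer += Sent;
            State->Length -= (size_t)Sent;
            continue;
        }
        if (Sent < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            return false; // Socket buffer full; re-arm EPOLLOUT and retry later.
        }
        return false; // Treat other errors as fatal for this sketch.
    }
    return true;
}
```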
The difference here is that we're doing encrypted TCP, which is much different from simple netput. Crypto is a huge bottleneck.
wait, "encrypt:0" not honored for TCP? |
No, that feature isn't supported for TCP.
Description
There are a few bugs in the epoll TCP implementation. A local perf test in WSL showed a 2x loopback improvement, but the numbers from the perf machines didn't seem to move at all.
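For reference, the receive side of a non-blocking, edge-triggered epoll loop has to drain the socket until EAGAIN, since a single recv() may return only part of the queued data. The sketch below is a generic illustration under those assumptions; RecvLoop and HandleBytes are hypothetical names, not code from this change.

```c
// Minimal sketch of the receive side: with edge-triggered epoll (EPOLLET),
// a single recv() is not enough; the socket must be drained until EAGAIN
// or the readiness notification can be missed.
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

// App-specific processing stub; a real handler would parse the byte stream.
static void HandleBytes(const uint8_t* Data, size_t Length) {
    (void)Data; (void)Length;
}

// Returns 0 once the socket is drained (EAGAIN), -1 on error or peer close.
static int RecvLoop(int Fd) {
    uint8_t Buffer[4096];
    for (;;) {
        ssize_t Received = recv(Fd, Buffer, sizeof(Buffer), 0);
        if (Received > 0) {
            HandleBytes(Buffer, (size_t)Received); // May be a partial read.
            continue; // Keep draining; more data may already be queued.
        }
        if (Received == 0) {
            return -1; // Peer closed the connection.
        }
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            return 0; // Drained; wait for the next EPOLLIN.
        }
        return -1; // Other error.
    }
}
```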
Testing
CI