Use a dedicated thread for tcp listener #4523
Conversation
There are minor points to consider/discard/accept.
Overall I think moving to a single-threaded tcp_listener is a major benefit.
Diff context:
void nano::transport::tcp_listener::on_connection (std::function<bool (std::shared_ptr<nano::transport::socket> const &, boost::system::error_code const &)> callback_a)
void nano::transport::tcp_listener::wait_available_slots ()
Should we close excess incoming connections or let the OS do it? If we don't service connections, the OS will queue them up to the listen backlog (max_listen_connections): https://beta.boost.org/doc/libs/develop/doc/html/boost_asio/reference/basic_socket_acceptor/listen.html
@dsiganos?
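For reference, a minimal sketch of where that backlog comes from, assuming a plain Boost.Asio acceptor; the port and setup here are illustrative, not the node's actual configuration:

```cpp
#include <boost/asio.hpp>

int main ()
{
	boost::asio::io_context io_ctx;
	boost::asio::ip::tcp::endpoint endpoint{ boost::asio::ip::tcp::v6 (), 54000 }; // arbitrary example port
	boost::asio::ip::tcp::acceptor acceptor{ io_ctx };
	acceptor.open (endpoint.protocol ());
	acceptor.bind (endpoint);
	// The backlog argument caps how many not-yet-accepted connections the OS will
	// queue on the node's behalf; beyond that the OS refuses or drops new connections.
	acceptor.listen (boost::asio::socket_base::max_listen_connections);
}
```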
Can we test this behaviour with more sockets queued than the max capacity, @gr0vity-dev?
I do not see a problem with this behaviour. This way effectively leaves the queueing to the operating system.
Another way would be to accept a socket and, if the limit is passed, close it.
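A minimal sketch of that alternative, assuming a blocking accept loop; `connection_count`, `handle_connection`, and `max_connections` are placeholders, not the node's actual members:

```cpp
#include <boost/asio.hpp>

#include <atomic>
#include <cstddef>
#include <utility>

std::atomic<std::size_t> connection_count{ 0 }; // placeholder for however the node counts connections

void handle_connection (boost::asio::ip::tcp::socket socket)
{
	// Placeholder: hand the accepted socket off to the rest of the node
	++connection_count;
}

void accept_loop (boost::asio::ip::tcp::acceptor & acceptor, std::size_t max_connections)
{
	while (acceptor.is_open ())
	{
		boost::system::error_code ec;
		auto socket = acceptor.accept (ec); // blocking accept on the dedicated listener thread
		if (ec)
		{
			break; // acceptor closed or fatal error
		}
		if (connection_count.load () >= max_connections)
		{
			// At capacity: close immediately rather than leaving further
			// connections queued in the OS listen backlog
			socket.close ();
			continue;
		}
		handle_connection (std::move (socket));
	}
}
```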
Force-pushed from 8b9c36a to 608d1f8.
For future reference: there is a problem with using blocking calls. The blessed ASIO way of doing things seems to be to always use async functions; here I decided to go with blocking calls instead.
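For contrast, a rough sketch of the fully-async pattern being referred to, where each completed accept re-arms the next `async_accept`; the names here are illustrative, not the node's code:

```cpp
#include <boost/asio.hpp>

#include <memory>

// Each completed accept schedules the next one on the io_context; all work
// stays on asio threads and no dedicated listener thread is needed.
void async_accept_loop (std::shared_ptr<boost::asio::ip::tcp::acceptor> acceptor)
{
	acceptor->async_accept ([acceptor] (boost::system::error_code ec, boost::asio::ip::tcp::socket socket) {
		if (!ec)
		{
			// ... hand the socket off ...
			async_accept_loop (acceptor); // continue accepting
		}
	});
}
```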
Force-pushed from ef48944 to 75fce7e.
One thing that confused me is the name tcp_listener, although I don't know how to name it better (tcp_server_container maybe?).
Also, the node does not stop when pressing Control-C: the accept does not return, probably because the io_context is stopped by the signal handling code.
# Conflicts:
#	nano/node/transport/socket.cpp
#	nano/node/transport/socket.hpp

# Conflicts:
#	nano/lib/thread_roles.cpp
#	nano/lib/thread_roles.hpp
#	nano/test_common/testutil.hpp
The issue with the interrupt signal should now be fixed.
This seems to have triggered a new TSAN warning; I'm not sure whether it's a false positive or not. I'm going to move to a coroutine-based implementation with a strand to avoid these problems altogether. The changes necessary will be minimal.
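For illustration only, a rough sketch of what a coroutine-based accept loop serialized on a strand could look like with Boost.Asio awaitables; this is an assumption about the shape of the follow-up, not its actual code, and it requires C++20 and a recent Boost:

```cpp
#include <boost/asio.hpp>

// Error handling omitted for brevity.
boost::asio::awaitable<void> accept_loop (boost::asio::ip::tcp::acceptor & acceptor)
{
	while (acceptor.is_open ())
	{
		auto socket = co_await acceptor.async_accept (boost::asio::use_awaitable);
		// ... hand the socket off; everything here resumes on the strand,
		// so the listener state is never touched concurrently ...
	}
}

void start (boost::asio::io_context & io_ctx, boost::asio::ip::tcp::acceptor & acceptor)
{
	auto strand = boost::asio::make_strand (io_ctx);
	boost::asio::co_spawn (strand, accept_loop (acceptor), boost::asio::detached);
}
```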
Force-pushed from 719588e to 780a2d9.
Force-pushed from 780a2d9 to e93f243.
I'm splitting this refactor and coroutines into two separate PRs. The currently approved code provides a solid foundation to build on, but introduces a new TSAN warning that will be fixed by a subsequent coroutine PR.
While I'm not aware of any problems with the previous implementation, the async way of implementing tcp_listener was generally harder to understand and to evolve over time. It was especially problematic when making changes to other aspects of the networking code. Given that our connections are long-lived and relatively infrequent, there is very little need for the TCP acceptor code to be focused on performance; instead we should prioritize readability and maintainability. This PR modifies it to run on a single, dedicated thread and use blocking IO operations, a sketch of which follows.
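As a rough sketch of that shape, under the assumption that the listener thread blocks in accept and waits for a free slot before accepting more; member names and the behaviour of `wait_available_slots ()` are guesses based on the diff, not the PR's code:

```cpp
#include <boost/asio.hpp>

#include <cstdint>
#include <thread>

class tcp_listener_sketch
{
public:
	void start (uint16_t port)
	{
		boost::asio::ip::tcp::endpoint endpoint{ boost::asio::ip::tcp::v6 (), port };
		acceptor.open (endpoint.protocol ());
		acceptor.bind (endpoint);
		acceptor.listen (boost::asio::socket_base::max_listen_connections);
		thread = std::thread ([this] { run (); });
	}

	void stop ()
	{
		// Note: cleanly interrupting a blocking accept is one of the trickier parts
		// (see the Control-C discussion above); closing the acceptor from another
		// thread is platform-dependent and shown here only to sketch the lifecycle.
		acceptor.close ();
		if (thread.joinable ())
		{
			thread.join ();
		}
	}

private:
	void run ()
	{
		while (acceptor.is_open ())
		{
			wait_available_slots (); // block until the connection limit allows another accept
			boost::system::error_code ec;
			auto socket = acceptor.accept (ec); // blocking IO on the dedicated thread
			if (!ec)
			{
				// ... create a server object for this connection ...
			}
		}
	}

	void wait_available_slots ()
	{
		// Placeholder: wait (e.g. on a condition variable) until a slot frees up
	}

	boost::asio::io_context io_ctx;
	boost::asio::ip::tcp::acceptor acceptor{ io_ctx };
	std::thread thread;
};
```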