impl: `Client` and `Clients`, how the server manages the different connected clients #846

Conversation
src/hp/derp/clients.rs
Outdated
```rust
pub async fn shutdown(self) {
    self.conn.shutdown().await;
}
```
Ok, so I originally had these shutdowns happen in a spawned task, left to run on its own without blocking. I thought it might be better not to `await` those shutdowns, since they get called in the main server loop: we could end up waiting for a connection to shut down instead of picking up the next piece of work in the loop. However, if we spawn tasks for the shutdown and don't wait for them to finish, we never have a way to know whether we've actually cleaned up all of our spawned tasks. So I made them async again, and we wait until the shutdown finishes before moving on. Not sure if that's the correct trade-off, though, so I'd like some feedback.
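There may be a middle ground between the two options discussed above: spawn the shutdowns so the loop isn't blocked, but keep the handles so they can all be joined when the server exits. The sketch below is an assumption, not the PR's code, and uses `std::thread` in place of tokio tasks purely to illustrate the bookkeeping; `Server`, `shutdown_conn`, and `drain` are hypothetical names.

```rust
use std::sync::{Arc, Mutex};
use std::thread::{self, JoinHandle};

struct Server {
    // Handles for in-flight shutdowns, so none are fire-and-forgotten.
    shutdowns: Vec<JoinHandle<()>>,
    // Ids of connections whose shutdown has fully completed.
    closed: Arc<Mutex<Vec<u64>>>,
}

impl Server {
    // Kick off a shutdown without waiting, but remember the handle.
    fn shutdown_conn(&mut self, id: u64) {
        let closed = Arc::clone(&self.closed);
        self.shutdowns.push(thread::spawn(move || {
            // ... flush and close the connection here ...
            closed.lock().unwrap().push(id);
        }));
    }

    // On server exit, wait for every pending shutdown to finish.
    fn drain(&mut self) {
        for handle in self.shutdowns.drain(..) {
            handle.join().unwrap();
        }
    }
}

fn main() {
    let mut server = Server {
        shutdowns: Vec::new(),
        closed: Arc::new(Mutex::new(Vec::new())),
    };
    server.shutdown_conn(1);
    server.shutdown_conn(2);
    // The main loop stays free; only teardown blocks.
    server.drain();
    assert_eq!(server.closed.lock().unwrap().len(), 2);
}
```

With tokio, the same shape could be expressed by collecting `JoinHandle`s (or using a `JoinSet`) and awaiting them only at shutdown, rather than awaiting each connection teardown inline in the server loop.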
Force-pushed from 73900d2 to 6755511 (Compare)
src/hp/derp/server.rs
Outdated
```rust
pub(crate) enum ServerMessage<C, R, W>
where
    C: Conn,
    R: AsyncRead + Unpin + Send + Sync + 'static,
    W: AsyncWrite + Unpin + Send + Sync + 'static,
{
    AddWatcher(PublicKey),
    ClosePeer(PublicKey),
    SendPacket((PublicKey, Packet)),
    SendDiscoPacket((PublicKey, Packet)),
    CreateClient(ClientBuilder<C, R, W>),
    Unregister(PublicKey),
}
```
Because of the `ClientBuilder`, we now have to drag these generic types through anything that touches a `ServerMessage`. Is there a better way to write this, so that I don't have to drag these types around?
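One common answer to this kind of generics creep is type erasure: hide the builder's type parameters behind a boxed trait object, so that `ServerMessage` itself needs none. The sketch below is an assumption, not the PR's code; `PublicKey`, `ClientBuilder`, and the `BuildClient` trait are simplified stand-ins for the real types.

```rust
use std::fmt;

// Simplified stand-in for the PR's PublicKey.
#[derive(Debug, Clone, Copy, PartialEq)]
struct PublicKey(u64);

// The one behavior the server needs from any builder.
trait BuildClient {
    // Returns a label here; would return a Client in the real code.
    fn build(self: Box<Self>) -> String;
}

// The builder stays generic over its connection type...
struct ClientBuilder<C> {
    key: PublicKey,
    conn: C,
}

impl<C: fmt::Debug> BuildClient for ClientBuilder<C> {
    fn build(self: Box<Self>) -> String {
        format!("client {:?} over {:?}", self.key, self.conn)
    }
}

// ...but the message enum carries no type parameters at all.
enum ServerMessage {
    CreateClient(Box<dyn BuildClient + Send>),
    ClosePeer(PublicKey),
}

fn main() {
    let msg = ServerMessage::CreateClient(Box::new(ClientBuilder {
        key: PublicKey(7),
        conn: "tcp-stream",
    }));
    if let ServerMessage::CreateClient(builder) = msg {
        // The server can build the client without knowing C, R, or W.
        assert!(builder.build().contains("PublicKey(7)"));
    }
}
```

The cost is one allocation per message and dynamic dispatch at the `build` call; the benefit is that everything touching `ServerMessage` sheds the `<C, R, W>` parameters.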
@@ -0,0 +1,345 @@
```rust
//! based on tailscale/derp/derp_server.go
```
wrong comment?
`Client` will eventually be able to handle multiple `ClientConnManager`s, but for now, the latest connection will be the one we hold onto. The server will use the `Clients` struct to manage all the client connections to the server. It is how it will actually send packets from one client to another.
keep types used in multiple systems in the same file
Force-pushed from 1ffd5f1 to 62fa621 (Compare)
Decided I needed to drastically simplify this, for now. This impl no longer attempts to handle multiple connections with the same key, or to track whether a client is connected to multiple derp servers. Both situations are apparently not that common, but they will be important to implement in the future in order to cover all cases. This is the last large piece before turning to the actual server.
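With that simplification, the `Clients` collection can be sketched as a plain map from key to connection, where registering under an existing key replaces the previous connection (the "latest connection wins" rule described in the commit message). This is an assumption about the shape of the code, not the PR's implementation; the field and method names are hypothetical.

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct PublicKey(u64);

// Simplified stand-in for the real ClientConnManager.
#[derive(Debug, PartialEq)]
struct ClientConnManager {
    conn_id: u64,
}

#[derive(Default)]
struct Clients {
    inner: HashMap<PublicKey, ClientConnManager>,
}

impl Clients {
    // Register a connection; the latest one wins. The old connection is
    // returned so the caller can shut it down cleanly.
    fn register(&mut self, key: PublicKey, conn: ClientConnManager) -> Option<ClientConnManager> {
        self.inner.insert(key, conn)
    }

    fn unregister(&mut self, key: &PublicKey) -> Option<ClientConnManager> {
        self.inner.remove(key)
    }
}

fn main() {
    let mut clients = Clients::default();
    let key = PublicKey(1);
    assert!(clients.register(key, ClientConnManager { conn_id: 1 }).is_none());
    // A reconnect under the same key displaces the old connection.
    let old = clients.register(key, ClientConnManager { conn_id: 2 });
    assert_eq!(old, Some(ClientConnManager { conn_id: 1 }));
    assert_eq!(clients.unregister(&key), Some(ClientConnManager { conn_id: 2 }));
}
```

Handling multiple simultaneous connections per key later would mean swapping the map's value type for a small collection plus a notion of which connection is current.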