
timeout while waiting for headers #3178

Closed
SylvainGarrigues opened this issue Mar 18, 2023 · 15 comments · Fixed by #3185
Labels
C-feature Category: feature. This is adding a new feature.

Comments

@SylvainGarrigues

SylvainGarrigues commented Mar 18, 2023

There doesn't seem to be a way to specify, on a server, a read timeout that closes an accepted client connection after a while when no client headers have been sent at all. See for instance the ReadTimeout field of Go's http.Server, where you can configure this.

The http1_header_read_timeout method only starts its timer once the first line of the HTTP/1 headers has been sent. So if I just netcat into my hyper HTTP server and don't send anything, the connection is kept open forever.

@SylvainGarrigues added the C-feature label Mar 18, 2023
@seanmonstar
Member

Hm, I see that in #2675 it originally started the timer immediately, but it was suggested that it should wait because some clients make connections eagerly. While there may be clients that do that, it does seem like the timer should probably start right away anyway. The server's resources are more constrained than a client's.

@paolobarbolini or @silence-coding, any thoughts?

@SylvainGarrigues
Author

If the timer doesn't start immediately (for h1 and h2, at least for now), I am reluctant to expose my Rust hyper server to the internet (without a reverse proxy), since I can't see a way to set up an initial connection timeout to prevent malicious people from opening millions of sockets to my server (and therefore holding open file descriptors) with no way for me to close them.

Is this a feature request or some sort of bug / security issue?

@paolobarbolini
Contributor

paolobarbolini commented Mar 24, 2023

Sorry, I missed the original notification. At the time this was originally implemented, the feeling was: let's do the bare minimum in hyper and let the user do everything else.

What this meant is that hyper itself has to enforce a header read timeout, because that can't be done efficiently outside of it, while a connection idle timeout doesn't need to be handled by hyper because it can be implemented by a wrapper around the connection. This isn't perfect, because it means the crate is vulnerable to DoS attacks by default, but that's probably something that could be handled a different way, like a second crate? Then again, having good defaults would be nice.

I remember @silence-coding didn't like that my initial PR handled it the way #3185 does, because then browsers couldn't preconnect: the connection would be shut down a few seconds later by the header timeout.

What do we think about this?

@SylvainGarrigues
Author

Thanks a lot for taking this into consideration. Will #3185 also allow a proper read timeout for h2/h3-only servers, or is it just for http1 servers?

@SylvainGarrigues
Author

> I remember @silence-coding didn't like that my initial PR handled it the way #3185 does, because then browsers couldn't preconnect: the connection would be shut down a few seconds later by the header timeout.
>
> What do we think about this?

Couldn't we consider it the hyper server owner's responsibility to figure out a suitable header read timeout value: long enough to allow the legitimate browser preconnect use case, but short enough to prevent DoS attacks?

For the sake of this issue, as long as I can set a value, I am happy. I am more worried now that the fix only mitigates attacks for http1 hyper servers, not h2 or h3 ones (and I have not found the proper setting for that yet).

@paolobarbolini
Contributor

paolobarbolini commented Mar 25, 2023

> Thanks a lot for taking this into consideration. Will #3185 also allow a proper read timeout for h2/h3-only servers, or is it just for http1 servers?

Just h1

> For the sake of this issue, as long as I can set a value, I am happy.

It wouldn't be right to force everyone to use the same value for both timeouts, though. If hyper really has to handle this, it should be a separate timeout.

@SylvainGarrigues
Author

> > Thanks a lot for taking this into consideration. Will #3185 also allow a proper read timeout for h2/h3-only servers, or is it just for http1 servers?
>
> Just h1

How can I prevent DoS attacks (many idle open connections with no headers sent) on my Rust hyper h2 server if I expose it directly to the internet?

@paolobarbolini
Contributor

Disable h2, or have someone implement timeouts for it too.

@seanmonstar
Member

So, @programatik29 recently shared this with me: https://gist.github.com/programatik29/36d371c657392fd7f322e7342957b6d1

The idea there is to share a timer between the AsyncRead and the Service. It can then check that the headers reach the Service within a certain amount of time after poll_read has first been called.

@rwthompsonii

This really should just have a default timeout in hyper. Many crates use hyper as their server base; at this point it's becoming the base implementation for the majority of web traffic in Rust. It shouldn't be the case that Rust web traffic can be DDoSed by default. If we want to provide an override so that preconnect can be accommodated, sure, but we definitely should fix this.

@josecelano

> So, @programatik29 recently shared this with me: https://gist.github.com/programatik29/36d371c657392fd7f322e7342957b6d1
>
> The idea there is to share a timer between the AsyncRead and the Service. It can then check that the headers reach the Service within a certain amount of time after poll_read has first been called.

Hi, @seanmonstar. Is that included in the hyper 1.0 roadmap?

@seanmonstar
Member

It is not a requirement for 1.0, but it could be added at any time.

@cbiffle

cbiffle commented Jan 28, 2024

> It can then check that the headers reach the Service within a certain amount of time after poll_read has first been called.

@seanmonstar, is it just me, or does that implementation potentially fail to work on http2 if the first stream issues prompt headers but the others don't? I'm not super familiar with axum, but it looks like Accept is at the connection level, so the state transitions it's detecting are across all streams?

(I'm attempting to mitigate http2 slow header attacks in my server and found this bug when looking for suggestions. It looks like in 1.1 there are still no mitigations built into hyper, so an approach like this one would be useful... if it works.)

Edit: Basically, I'm having a hard time convincing myself that this shouldn't have e.g. a counting semaphore for tracking whether something is currently in header/trailer transfer state, rather than two booleans per connection.

@seanmonstar
Member

seanmonstar commented Jan 28, 2024

@cbiffle I'm curious about your case. HTTP2 is different, so feel free to open another issue, with more details of what you're trying to stop.

@josecelano

Hi, @seanmonstar. This feature was not included in version 1.0, right?

I'm trying to implement it at the Axum server level.

I've created a sample repo:

https://github.com/josecelano/axum-server-timeout

I've only set http1_header_read_timeout.

In case other people are looking for the same, this is supposed to be the way to do it (but I have not tested it):

https://gist.github.com/programatik29/36d371c657392fd7f322e7342957b6d1

I think including a default timeout for "waiting for headers" would be a good security patch. Does anybody agree? Or are there some cases where you might not want it?
