
Ability to limit incoming request size (Kitura#1384) #206

Closed
pushkarnk opened this issue May 22, 2019 · 9 comments

@pushkarnk
Contributor

Copied from Kitura/Kitura#1384

Kitura currently does not have the concept of a limit on incoming request size. It is possible to destabilize a server by sending (accidentally or maliciously) a large request payload which either impairs performance, or exhausts available memory.

A limit should be configurable on a server which causes requests above that size to be rejected, and the connection closed.
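The requested behaviour can be sketched as a simple configuration check. This is a minimal illustration only; `ServerOptions`, `requestSizeLimit`, and `check` are invented names for this sketch, not actual Kitura API:

```swift
// Hypothetical sketch: none of these names exist in Kitura.
struct ServerOptions {
    /// Maximum accepted request size in bytes; nil means unlimited.
    var requestSizeLimit: Int?
}

enum RequestSizeCheck: Equatable {
    case accept
    case reject(status: Int, reason: String)
}

func check(requestBytes: Int, options: ServerOptions) -> RequestSizeCheck {
    guard let limit = options.requestSizeLimit, requestBytes > limit else {
        return .accept
    }
    // 413 Payload Too Large; per the issue, the server would also close the connection.
    return .reject(status: 413, reason: "Payload Too Large")
}
```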

@weissi

weissi commented May 22, 2019

Ouch! Instead of limiting the incoming request size (which usually causes trouble for file uploads; ask Vapor, for example), I'd suggest implementing streaming with back-pressure. That way the maximum amount of memory can remain constant and yet you can support arbitrarily large file uploads. Of course, a user of Kitura should be able to control what's too much. So for some JSON API, 10 MB might be a sensible maximum, but for another route that accepts file uploads there shouldn't be a maximum, because files can be streamed straight to disk (possibly policing the quota for the logged-in user, of course).
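The back-pressure idea can be illustrated with a toy pull-based stream: the consumer requests each chunk, so the producer can never run ahead of it and memory stays bounded by one chunk. All names here are invented for this sketch; real SwiftNIO back-pressure works through channel read signalling, not an API like this:

```swift
// Illustrative only: a producer that emits a chunk only when asked.
struct ChunkProducer {
    private var remaining: [[UInt8]]
    init(chunks: [[UInt8]]) { self.remaining = chunks }
    /// Emitting only on demand is the essence of back-pressure.
    mutating func next() -> [UInt8]? {
        remaining.isEmpty ? nil : remaining.removeFirst()
    }
}

/// Hands each chunk fully to the sink (e.g. a disk write) before requesting
/// the next one, so memory use is bounded by a single chunk.
func streamToSink(_ producer: inout ChunkProducer, sink: ([UInt8]) -> Void) -> Int {
    var total = 0
    while let chunk = producer.next() {
        sink(chunk)
        total += chunk.count
    }
    return total
}
```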

@RudraniW
Contributor

  • I have implemented a function to calculate the size of the request headers.
  • I have implemented a function that sends a status code and description when the request size exceeds the default maximum request size.
  • I am now configuring the HTTP server to set the maximum request size and the maximum number of connections.

@pushkarnk
Contributor Author

pushkarnk commented Jul 11, 2019

@djones6 We did consider writing a new channel handler to track the number of raw bytes flowing into a channel. However, if that particular connection is supporting pipelining/keep-alive, I don't think there's any way to mark HTTP request boundaries on a raw byte stream. Data pertaining to multiple HTTP requests could be flowing in sequentially. The HTTP message boundaries are drawn only after the HTTPDecoder converts a chunk of bytes into an HTTPRequestHead and a body (ByteBuffer). So, on such a connection, how does one calculate the size of a single HTTP message without having an HTTP decoder parse the raw bytes?
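One answer implicit in the comment above is to count bytes after decoding, where message boundaries are explicit even on a pipelined connection. A toy sketch, with `RequestPart` and `MessageSizeTracker` as invented stand-ins for NIO's `HTTPServerRequestPart` and a real channel handler:

```swift
// Stand-in for the decoded parts an HTTP decoder emits; names are invented.
enum RequestPart {
    case head(headerBytes: Int)
    case body(bytes: Int)
    case end
}

/// Accumulates a per-message size across a pipelined stream of decoded parts,
/// returning the total size each time a message completes.
struct MessageSizeTracker {
    private var current = 0
    init() {}

    mutating func consume(_ part: RequestPart) -> Int? {
        switch part {
        case .head(let headerBytes):
            current = headerBytes   // a new message starts here
            return nil
        case .body(let bytes):
            current += bytes
            return nil
        case .end:
            defer { current = 0 }   // reset for the next pipelined message
            return current          // message boundary reached
        }
    }
}
```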

@pushkarnk
Contributor Author

pushkarnk commented Jul 11, 2019

> Ouch! Instead of limiting the incoming request size (which usually causes trouble for file uploads; ask Vapor, for example), I'd suggest implementing streaming with back-pressure. That way the maximum amount of memory can remain constant and yet you can support arbitrarily large file uploads. Of course, a user of Kitura should be able to control what's too much. So for some JSON API, 10 MB might be a sensible maximum, but for another route that accepts file uploads there shouldn't be a maximum, because files can be streamed straight to disk (possibly policing the quota for the logged-in user, of course).

Thanks for this input, @weissi! Is there an example in swift-nio of a streaming handler with back-pressure? Based on my current understanding, I am not sure we could fit such a handler into the present Kitura implementation, given that we invoke a route handler only after the entire HTTP request has been read and processed.

@weissi

weissi commented Jul 11, 2019

We have a bunch of examples scattered around and I'm happy to point you to them. But right now, you're right: if Kitura expects the entire body when invoking the route handler, then you won't be able to do back-pressure, because you need the whole thing in memory before you can even start processing anything. Back-pressure is about slowing down the producer (e.g. a client sending the request body) so that it is no faster than the processor (the Kitura implementation of a given route). Unfortunately, if the processor doesn't do streaming, there's no point in slowing down the producer...

So right now I think there are two options:

  1. PHP-esque 'maximum request body size'
  2. Make Kitura work on streams of request bodies rather than on an already-available, in-memory request body.
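Option 1 can be sketched as fail-fast accumulation: stop buffering as soon as the cap is exceeded, instead of reading the whole payload first. `BodyAccumulator` and `BodyError` are invented names for illustration, not anything in Kitura:

```swift
// Hypothetical sketch of a PHP-esque maximum body size.
enum BodyError: Error {
    case payloadTooLarge(limit: Int)
}

struct BodyAccumulator {
    let limit: Int
    private(set) var body: [UInt8] = []
    init(limit: Int) { self.limit = limit }

    /// Throws as soon as the accumulated body would exceed the limit, so the
    /// server can answer 413 and close the connection without buffering more.
    mutating func append(_ chunk: [UInt8]) throws {
        guard body.count + chunk.count <= limit else {
            throw BodyError.payloadTooLarge(limit: limit)
        }
        body.append(contentsOf: chunk)
    }
}
```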

@ianpartridge
Contributor

I fear option 2 is not feasible while we have to maintain the same protocol ServerRequest between Kitura-net and Kitura-NIO. Consequences of decisions made several years ago by people no longer around 😀

@weissi

weissi commented Jul 11, 2019

> I fear option 2 is not feasible while we have to maintain the same protocol ServerRequest between Kitura-net and Kitura-NIO. Consequences of decisions made several years ago by people no longer around 😀

Makes sense! For Kitura 3 then :)

@RudraniW
Contributor

  • I have added a new test, testRequestSize, in ClientE2ETests to check the behaviour when the request size exceeds the given maximum request size.
  • I am now working on limiting the maximum number of concurrent connections.

@pushkarnk
Contributor Author

Merged. Closing.
