[WIP] feat: Ability to limit request size and connection count using HTTPServerConfigurationHandler. #222
Conversation
Force-pushed from 83523b5 to 765ef49.
random drive-by comment: If preferred, SwiftNIO also supports not actually accepting more connections than a certain limit. Currently, this PR accepts the connection and then sends an HTTP error response back. That's totally cool if that's what you want. I just wanted to point out that it is also possible for SwiftNIO to not accept more connections than wanted. If you would like to go down that path, please feel free to reach out and I can run you through what's necessary to implement that.
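A rough sketch of that alternative, assuming NIO 2: on the listening (server) channel, each accepted connection arrives as a `channelRead` of a child `Channel`, so a handler there can pause accepting by toggling `autoRead`. `AcceptLimitHandler` and everything in it are illustrative assumptions, not part of this PR or a SwiftNIO API:

```swift
import NIO

// Illustrative sketch: stop the listening socket from accepting once
// `connectionLimit` connections are open, instead of accepting and
// replying with an HTTP error.
final class AcceptLimitHandler: ChannelInboundHandler {
    typealias InboundIn = Channel   // the server channel reads accepted child Channels

    private let connectionLimit: Int
    private var openConnections = 0

    init(connectionLimit: Int) {
        self.connectionLimit = connectionLimit
    }

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        let childChannel = self.unwrapInboundIn(data)
        self.openConnections += 1
        if self.openConnections == self.connectionLimit {
            // at the limit: pause accepting new connections
            context.channel.setOption(ChannelOptions.autoRead, value: false)
                .whenFailure { context.fireErrorCaught($0) }
        }
        // when this connection closes, hop back to the server channel's
        // event loop, decrement, and resume accepting if we dropped below the limit
        childChannel.closeFuture.hop(to: context.eventLoop).whenComplete { _ in
            self.openConnections -= 1
            if self.openConnections == self.connectionLimit - 1 {
                context.channel.setOption(ChannelOptions.autoRead, value: true)
                    .whenFailure { context.fireErrorCaught($0) }
            }
        }
        context.fireChannelRead(data)
    }
}
```

Such a handler would be installed via `ServerBootstrap`'s `serverChannelInitializer`, not the `childChannelInitializer`.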
Force-pushed from 765ef49 to 204d34c.
Force-pushed from 204d34c to 1ce626b.
@weissi, Kitura-net has the functionality of accepting the connection and then sending an HTTP error response back. To maintain parity between Kitura-net and Kitura-NIO, this PR accepts the connection and then sends an HTTP error response back. But I would like to explore the option where SwiftNIO does not accept more connections than wanted. I will ping you on Slack regarding this.
```swift
let config: NIOHTTPServerUpgradeConfiguration = (upgraders: upgraders, completionHandler: { _ in
    _ = channel.pipeline.removeHandler(httpHandler)
})
return channel.pipeline.configureHTTPServerPipeline(withServerUpgrade: config, withErrorHandling: true).flatMap {
    if let nioSSLServerHandler = self.createNIOSSLServerHandler() {
        _ = channel.pipeline.addHandler(nioSSLServerHandler, position: .first)
    }
    _ = channel.pipeline.addHandler(serverConfigurationHandler, position: .first)
```
all the lines that start with `_ = channel.pipeline` here aren't quite right. You're essentially racing the adding/removing of the handlers. Basically you have two options:
- chain the futures, using `channel.pipeline.addHandler(MyHandler()).flatMap { channel.pipeline.addHandler(MySecondHandler()) }.flatMap { channel.pipeline.addHandler(MyThirdHandler()) }`
- prepare an array of handlers and add them in one shot using `addHandlers()`.
So here for example:
```swift
var allMyHandlers: [ChannelHandler] = []
if let nioSSLServerHandler = self.createNIOSSLServerHandler() {
    allMyHandlers.append(nioSSLServerHandler)
}
allMyHandlers.append(serverConfigurationHandler)
allMyHandlers.append(httpHandler)
return channel.pipeline.addHandlers(allMyHandlers)
```
Generally, whenever you need to use `_ =` with SwiftNIO, you either found a bug (please report) or something went wrong. We tried fairly hard to mark all results that generally make sense to be ignored with `@discardableResult`. The result of `addHandler` is definitely not discardable.
```swift
public func channelReadComplete(context: ChannelHandlerContext) {
    requestSize = 0
}
```
this almost certainly does not do what is intended here. `channelReadComplete` is invoked at the end of a 'read burst', not at the end of a message. You want to reset `requestSize` when you receive `.end` in `channelRead`.

Let's imagine the client sends us enough data for about three TCP packets in quick succession. When the first one is received, NIO will be able to read some data and will invoke `channelRead` with the data. NIO however will speculatively try to read more, just in case there's more data arriving from the network. In our example here the second of the three packets might have arrived, so NIO can indeed read more and invoke `channelRead` again. Same for the third packet. After receiving that, NIO again speculatively tries to read more; now there's nothing more available, and NIO will signal the end of the read burst by invoking `channelReadComplete`, telling you that at this very point in time we're done with reading.

You can however not reset `requestSize` to `0` in `channelReadComplete`. A client that does something like "send a few kilobytes; wait 100 ms; send a few kilobytes; wait 100 ms; repeat" could then send you arbitrary amounts of data, because `channelReadComplete` will be invoked between the bursts. See also the docs on `channelReadComplete`.
this should be fairly easy to reproduce in a unit test, which I would recommend writing.
Ideally you test just `HTTPServerConfigurationHandler` in isolation using `EmbeddedChannel`, where you are in full control of the events sent. If you need examples, please reach out; @ianpartridge and @pushkarnk might also be able to help.
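For instance, a minimal sketch of such an `EmbeddedChannel` test, written here against the `HTTPRequestBodySizeLimiter` example that appears further down this thread (with its hard-coded 1 MiB limit); the test class and names are illustrative, not part of the PR:

```swift
import XCTest
import NIO
import NIOHTTP1

final class RequestSizeLimitTests: XCTestCase {
    func testOversizedBodyClosesTheChannel() throws {
        // EmbeddedChannel runs the handler without any real I/O,
        // so we control exactly which events it sees.
        let channel = EmbeddedChannel(handler: HTTPRequestBodySizeLimiter())

        let head = HTTPRequestHead(version: HTTPVersion(major: 1, minor: 1),
                                   method: .POST,
                                   uri: "/upload")
        try channel.writeInbound(HTTPServerRequestPart.head(head))

        // two megabytes of body, which is over the handler's 1 MiB limit
        var body = channel.allocator.buffer(capacity: 2 * 1024 * 1024)
        body.writeBytes(Array(repeating: UInt8(0), count: 2 * 1024 * 1024))
        try channel.writeInbound(HTTPServerRequestPart.body(body))

        // run any queued tasks on the embedded event loop
        channel.embeddedEventLoop.run()

        // the limiter should have closed the channel instead of forwarding
        XCTAssertFalse(channel.isActive)
    }
}
```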
```swift
public func channelActive(context: ChannelHandlerContext) {
    _ = server.connectionCount.add(1)
}
```
do you want to restrict "connections" or "currently active requests"? With HTTP/1.1 keep-alive those two are different. You're implementing limiting "connections", is that what you want?
Thanks for your feedback @weissi. For now I am implementing limiting "connections". But maybe in the future I will look into limiting "currently active requests".
The decision to limit the number of connections here was largely based on what could be easily implemented in Kitura-net. Limiting active requests is more attractive, but might take some extra work to implement on -net (when waking up a reader thread, we'd need to check if we were allowed to process another request, and have a way to defer the read(s) until later).
Though this doesn't necessarily preclude us from implementing it in Kitura-NIO - the configuration would need to be consistent across the two, but we could mark it as unsupported on -net.
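For reference, counting connections in `channelActive`/`channelInactive` along the lines this PR takes might look like the following sketch; the shared `NIOAtomic` counter (from NIOConcurrencyHelpers), the handler name, and the plain close are illustrative assumptions rather than the PR's exact code:

```swift
import NIO
import NIOConcurrencyHelpers

// Illustrative sketch of connection limiting in the accept-then-reject style.
final class ConnectionLimitHandler: ChannelInboundHandler {
    typealias InboundIn = Any

    // shared across all channels of the server, hence atomic
    private let connectionCount: NIOAtomic<Int>
    private let connectionLimit: Int

    init(connectionCount: NIOAtomic<Int>, connectionLimit: Int) {
        self.connectionCount = connectionCount
        self.connectionLimit = connectionLimit
    }

    func channelActive(context: ChannelHandlerContext) {
        // NIOAtomic's add(_:) returns the *previous* value
        if self.connectionCount.add(1) >= self.connectionLimit {
            // over the limit: this sketch simply closes; the PR instead
            // sends an HTTP error response back before closing
            context.close(promise: nil)
            return
        }
        context.fireChannelActive()
    }

    func channelInactive(context: ChannelHandlerContext) {
        // balanced with the add(1) above, also for rejected connections
        _ = self.connectionCount.sub(1)
        context.fireChannelInactive()
    }
}
```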
Force-pushed from 864814c to 2613958.
```swift
    }
    return channel.pipeline.addHandler(httpHandler)
}
return channel.eventLoop.makeSucceededFuture(()) }.flatMap {
```
you can just drop the `return channel.eventLoop.makeSucceededFuture(()) } .flatMap {`
```swift
public func triggerUserOutboundEvent(context: ChannelHandlerContext, event: Any, promise: EventLoopPromise<Void>?) {
    requestSize = event as? Int ?? 0
}
```
@RudraniW there are a number of issues here:
- the `event` type should be a type private to you; if the event is of any other type, you should forward it on, as you can't expect that you're the only one sending outbound user events (same for inbound ones). A sketch of this pattern follows below.
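A minimal sketch of that point, covering only the outbound-event part of the handler; `RequestSizeLimitEvent` and `SizeEventHandler` are hypothetical names for illustration, not this PR's types:

```swift
import NIO

// a private marker type so we only ever consume our own events
struct RequestSizeLimitEvent {
    let requestSize: Int
}

final class SizeEventHandler: ChannelOutboundHandler {
    typealias OutboundIn = Any
    typealias OutboundOut = Any

    private var requestSize = 0

    func triggerUserOutboundEvent(context: ChannelHandlerContext, event: Any, promise: EventLoopPromise<Void>?) {
        if let event = event as? RequestSizeLimitEvent {
            // this one is ours: consume it and complete the promise
            self.requestSize = event.requestSize
            promise?.succeed(())
        } else {
            // someone else's event: forward it on unchanged
            context.triggerUserOutboundEvent(event, promise: promise)
        }
    }
}
```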
But really you're making your life unnecessarily hard. You should be doing the request body size limiting after the HTTP parser. Then the whole limiting is pretty straightforward. As an example, see the handler below:
```swift
final class HTTPRequestBodySizeLimiter: ChannelInboundHandler {
    typealias InboundIn = HTTPServerRequestPart
    typealias InboundOut = HTTPServerRequestPart

    private var bytesReceivedSoFar: Int
    private let maximumAllowedBytes: Int

    init() {
        self.bytesReceivedSoFar = 0
        self.maximumAllowedBytes = 1024 * 1024
    }

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        switch self.unwrapInboundIn(data) {
        case .head:
            // at this point in time we must have received 0 bytes
            assert(self.bytesReceivedSoFar == 0)
        case .body(let newBytes):
            guard self.bytesReceivedSoFar + newBytes.readableBytes <= self.maximumAllowedBytes else {
                // for this example, let's just close.
                context.close(promise: nil)
                return
            }
            self.bytesReceivedSoFar += newBytes.readableBytes
        case .end:
            // let's reset the bytes received so far
            self.bytesReceivedSoFar = 0
        }
        // let's forward whatever we have received
        context.fireChannelRead(data)
    }
}
```
The way you would integrate that is:

```swift
channel.pipeline.configureHTTPServerPipeline(...).flatMap {
    channel.pipeline.addHandler(HTTPRequestBodySizeLimiter(...))
}.flatMap {
    channel.pipeline.addHandler(httpHandler)
}
```
Does this make sense?
@weissi The objective here was to prevent oversized requests from exhausting the memory on the server. The reason I suggested the approach of limiting before HTTP header parsing is that a misbehaving client could (deliberately or otherwise) send arbitrary amounts of data as HTTP headers.
I believe that many HTTP servers have a separate limit on the size of the HTTP headers (Apache for example), but Kitura does not (at least, not via Kitura-net). Does Swift-NIO provide a way to limit the size of the headers it accepts? If so, then imposing separate limits on headers and body would be a superior approach.
@djones6 SwiftNIO does have a non-configurable 80kiB limit on the total size of all headers.
If you can find anything where SwiftNIO buffers indefinite amounts of anything in memory, please contact us and we will treat it like a security issue.
There's no need to limit anything in front of NIO's HTTP parser or any other parsers.
To make it easy for your own parsers not to grow infinitely in memory, NIO's `ByteToMessageHandler` supports a `maxBufferSize`, so you as the user can configure any `ByteToMessageDecoder` to not consume more than a fixed amount of buffer space.
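As an illustration, a minimal sketch of that knob, assuming NIO 2's `ByteToMessageHandler(_:maximumBufferSize:)` initializer; `FixedFrameDecoder` is a hypothetical decoder written for this example, not a NIO API:

```swift
import NIO

// a trivial decoder that emits fixed-size 128-byte frames
struct FixedFrameDecoder: ByteToMessageDecoder {
    typealias InboundOut = ByteBuffer
    let frameLength = 128

    mutating func decode(context: ChannelHandlerContext, buffer: inout ByteBuffer) throws -> DecodingState {
        guard let frame = buffer.readSlice(length: self.frameLength) else {
            return .needMoreData
        }
        context.fireChannelRead(self.wrapInboundOut(frame))
        return .continue
    }
}

// cap the decoder's internal buffer at 80 kiB, mirroring the header limit
// discussed above; NIO fails the decode instead of buffering forever
let limitedDecoder = ByteToMessageHandler(FixedFrameDecoder(), maximumBufferSize: 80 * 1024)
// then: channel.pipeline.addHandler(limitedDecoder)
```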
@RudraniW I think the solution here should probably be two limits: the request header size and the body size (we'd need to implement this in Kitura-net, but that should be a relatively straightforward change on top of the existing prototype).
@djones6 you mean two limits for Kitura-net or Kitura-NIO? Because I think for Kitura-NIO you only need to limit what you buffer (the body). Pretty much every web browser/server has a limit, and 80 kiB seems large enough for everybody and small enough that it doesn't matter (we took the same value as NodeJS/nginx because it's in http_parser). It doesn't make sense to limit the header block even further, because 80 kiB is smaller than most kernels' receive buffers. So even if you limit it in user-space, it's in memory already anyway and there's nothing you can do about it. For Kitura-net I'd suggest also just picking an arbitrary value of something like 80 kiB; I don't think it needs to be configurable tbh.
@djones6 btw, there are other things in HTTP/1 that need limiting: the URL and the status description. In NIO we count those towards the header size. So we treat all of these the same:

```
GET /veeeeeeeeerrrrryyyy_long_path_that_might_be_multiple_gigabytes_long [...]
```

```
GET /short/path HTTP/1.1
but: veeeerrryyyyy-long
or: very
many: many
many-many: headers
```

or if we parse responses there can also be

```
HTTP/1.1 200 okaayyyyyyyyy-with-manyyyy-bytes-potentially-multiple-gigabytes
```

So more precisely, NIO's limit is: everything until we see `\r\n\r\n` that needs to be maintained must fit in 80 kiB. Everything after that we stream to you. Oh, and the same 80 kiB limit applies to the trailers.
I think it'll make sense then for Kitura-net to match NIO's 80 kiB default for headers. That's also how I would implement it for Kitura-net, which uses CHTTPParser: we'd impose a header read limit up to the point where the parser indicates that the headers are complete (method, path, headers), after which we'd impose the body limit on anything received after that.
@djones6 cool, if you use `CHTTPParser` (the one from Node.js), then you'll have the 80 kiB limit by default (unless you changed it).
Description
This PR allows a requestSizeLimit (the maximum number of bytes for a request, including the headers) and a connectionLimit (the total number of concurrent connections) to be configured when registering a server. A new duplex handler, `HTTPServerConfigurationHandler`, is used to check the request size and connection count. This handler is placed before the NIOSSL handler.

Example usage:
```swift
let serverConfig = HTTPServerConfiguration(requestSizeLimit: 10000, connectionLimit: 100)
```
If you do not specify either of the parameters, or do not specify an HTTPServerConfiguration at all, then default values are used. The defaults are defined in `HTTPServerConfiguration` in Kitura-NIO here: https://github.com/RudraniW/Kitura-NIO/blob/limitRequestSize/Sources/KituraNet/HTTP/HTTPServerConfiguration.swift

Motivation and Context
see: #206