Limit size of post requests #6542
What would that mean beyond doing this?

```js
export function handle({ event, resolve }) {
	const length = +(event.request.headers.get('content-length') ?? '0');

	if (length > MAX_SIZE) {
		throw error(413);
	}

	return resolve(event);
}
```

Could also reject POST requests without a content-length, but you get the idea. (I'm assuming — haven't tested this — that it's not possible for the requester to lie about the content-length, i.e. declare a short content-length but then have a huge payload.)

One wrinkle is that if you wanted to do this on a per-route basis, we would need to change the proposed actions API:

```diff
export const actions = {
	/** @type {import('./$types').Action} */
-	uploadPhoto: ({ fields, files }) => {
+	uploadPhoto: async ({ request }) => {
+		const length = +(request.headers.get('content-length') ?? '0');
+		if (length > MAX_SIZE) throw error(413);
+
+		const { fields, files } = await formData(request);
		const title = fields.get('title');
		const image = files.get('image');
		// ...
	}
}
```

Perhaps that's a good thing though. It would mean we could specify …
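The checks discussed above can be sketched as a standalone predicate. This is a hypothetical helper with illustrative names, not SvelteKit API; `MAX_SIZE` is an assumed constant. It also covers the "reject POSTs without a content-length" variant:

```javascript
// Hypothetical helper (not part of SvelteKit): decide whether to reject a
// request based on its method and its declared content-length header.
const MAX_SIZE = 1024 * 1024; // assumed 1 MiB limit

function shouldReject(method, contentLength) {
	if (method !== 'POST') return false; // only guard body-carrying requests here
	if (contentLength === null) return true; // no content-length declared: reject
	return Number(contentLength) > MAX_SIZE; // declared size over the limit: reject
}
```

In a real `handle` hook you would call this with `event.request.method` and `event.request.headers.get('content-length')` and throw a 413 when it returns true.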
I find it weird to solve this at the framework level... Anyway, if we go further, we don't actually need to deny larger requests; we could stream them instead, and if there are too many requests from other users to handle, we could send a header (a timeout) that makes the client wait a specific amount of time before it resumes sending more data. Going further still, we could do load balancing at the level of endpoints and users, somewhat like what K8s does, but instead of making more server instances and distributing requests between them, distributing requests between endpoints in a "streaming" way. This doesn't actually solve the memory problem, but it does address server unresponsiveness: if some specific endpoint takes too much memory, we could terminate its instance and send the "timeout" (or some error) to the client. Anyway, this kind of thing is more typically done at the level of a proxy, runtime, container, or a system built on top of containers like K8s.
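The "make the client wait" idea above maps onto existing HTTP machinery. A minimal sketch using the standard `Response` class; the status code and delay value are illustrative:

```javascript
// Sketch: tell an overloaded client to back off using a standard
// 503 response with a Retry-After header (values are illustrative).
const busy = new Response('Server busy, retry later', {
	status: 503,
	headers: { 'Retry-After': '10' } // seconds the client should wait before retrying
});
```

`Retry-After` is the standard way (RFC 9110) to communicate a retry delay, so well-behaved clients and proxies already understand it.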
@Mlocik97 my concern here isn't excessive usage, but malicious users. If users discover they can crash any SvelteKit server by making a 10GB POST request, that could be abused.

@Rich-Harris yeah, it's possible you could solve this in …
Did some tests and confirmed that a) … Which means that, other than the addendum to the forms proposal, there's no breaking change required here, so I'll remove that label.
Simply checking the `content-length` header is not sufficient on its own.
It is important to know that a server is allowed to reject requests without a `content-length` header (411 Length Required).
I guess all of this boils down to what features are wanted in SvelteKit. If you choose to offer a way to limit the size of requests … As it currently stands with @Rich-Harris's proposal, one would have to manually add some code to every endpoint.

tldr: …
Can't we have, per endpoint, something like:

```js
export const POST = {
	event: (event) => {
		// logic
	},
	config: (config) => ({ ...config, limit: 1024 }) // size in kB; 1024 means a 1MB upload limit
}
```

I know it's ugly (as an API); I was thinking more about the approach, i.e. being able to configure the limit right in the HTTP methods (or in actions, in the case of a +page.server file).
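One small plain-JavaScript note on the `config` callback in the sketch above: an arrow function returning an object literal needs parentheses around the braces, otherwise they parse as a block body rather than an object:

```javascript
// Plain JavaScript, unrelated to any SvelteKit API: parenthesize an object
// literal returned from a concise arrow body so it isn't parsed as a block.
const withLimit = (config) => ({ ...config, limit: 1024 });
```

Without the parentheses, `(config) => { ...config, limit: 1024 }` is a syntax error.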
Browser behaviour is a subset of what RFC 9110 allows - I'm pretty sure all browsers will always include `content-length`.
I too believe that browsers always include `content-length`.
If we're worried about malicious users, then considering browser behavior alone is not sufficient. A malicious user could use any client they wish, including one that is not spec compliant.
If an app chooses to accept a request with …

If the request size exceeds the stated `content-length` …

I was worrying what might happen if a malicious client sent no `content-length` header.
Actually, we shouldn't be erroring on an undefined `content-length`.

If we include this in the framework, for the option name I'd propose …

If we want the user to do this themselves, they basically have to disallow an undefined `content-length`.

For reference, here's the Next.js implementation: https://github.com/vercel/next.js/blob/6f352357fb432b954930d6aeae890b5e0fd2e6aa/packages/next/server/api-utils/node.ts#L141
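That policy could look like this as a pure function. The function name and the method list are assumptions for illustration, not the actual SvelteKit or Next.js code:

```javascript
// Hypothetical policy check: allow a missing content-length only for methods
// that carry no body; otherwise compare the declared size against the limit.
function checkBodySize(method, contentLength, limit) {
	const bodyless = method === 'GET' || method === 'HEAD';
	if (contentLength === null) return bodyless; // undefined length is fine only when no body is expected
	return Number(contentLength) <= limit; // declared size must be within the limit
}
```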
surely the purpose is to not buffer excessive requests in the first place?
same way we handle CSRF — we have a sensible default, and if you want custom handling, you can easily implement it yourself in …
I was thinking that if you call these methods we know you're not streaming, which means …
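Not buffering excessive requests in the first place means enforcing the limit while consuming the body. A sketch of that idea; the function and error text are illustrative, not SvelteKit internals:

```javascript
// Sketch: read a body chunk-by-chunk, aborting as soon as the running total
// exceeds the limit, instead of buffering the whole payload and checking after.
async function readWithLimit(chunks, limit) {
	const parts = [];
	let size = 0;
	for await (const chunk of chunks) { // works for streams and plain iterables
		size += chunk.byteLength;
		if (size > limit) throw new Error('413: request body exceeds limit');
		parts.push(chunk);
	}
	return Buffer.concat(parts);
}
```

Because the check happens per chunk, an attacker who lies about `content-length` still can't make the server hold more than `limit` bytes in memory.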
Okay, so streaming will be disabled by default. Sounds like a reasonable enough solution for now. I think longer-term we might be able to use the other proposed solution, but I'm not going to be able to implement that before tomorrow.
Hmmm, well I probably should have realized this earlier, but it's the adapter that calls …
As a workaround:

```js
/**
 * @param {number} length
 */
function limitStream(length) {
	let size = 0;
	return new TransformStream({
		transform(chunk, controller) {
			size += chunk.byteLength;
			if (size > length) {
				controller.error(new Error(`Received body size exceeded allowed: ${length} bytes`));
				controller.terminate();
				return;
			}
			controller.enqueue(chunk);
		}
	});
}

/**
 * @param {Request} request
 * @param {number} body_size_limit
 */
export function limit(request, body_size_limit) {
	const content_length = Number(request.headers.get('content-length'));
	if (content_length && content_length > body_size_limit) {
		throw new Error(
			`Received content-length of ${content_length}, but only accept up to ${body_size_limit} bytes.`
		);
	}

	return new Request(request.url, {
		method: request.method,
		headers: request.headers,
		body: request.body?.pipeThrough(limitStream(body_size_limit)),
		duplex: 'half' // Node's fetch implementation requires this when the body is a stream
	});
}
```

```js
/** @type {import('./$types').Actions} */
export const actions = {
	async default({ request }) {
		const data = await limit(request, 512 * 1024).formData();
		// ...
	}
}
```
Describe the problem
I believe a malicious user could bring down the server by issuing a very large POST request. I think requests get buffered into memory, so you could exhaust the server's memory.
Describe the proposed solution
Next.js has a default 1MB limit: https://nextjs.org/docs/api-routes/request-helpers#custom-config
Express has a default 100kb limit: https://github.com/expressjs/body-parser#limit-3
Play Framework has an in memory limit as well as a disk limit:
https://www.playframework.com/documentation/2.8.x/JavaBodyParsers
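For comparison, the Next.js per-route limit from the linked docs is configured like this in an API route (the file path is illustrative; the `sizeLimit` value shown is Next.js's default):

```javascript
// e.g. pages/api/upload.js: per-route body size config in Next.js API routes
export const config = {
	api: {
		bodyParser: {
			sizeLimit: '1mb' // default; larger request bodies are rejected
		}
	}
};
```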
I'm not sure if a server-wide limit is enough, or if we also need to be able to set it on a per-endpoint basis or at some other granularity. It should be configurable.
Alternatives considered
No response
Importance
nice to have
Additional Information
No response