(OTS) Zstd Bomb Blob #876

Summary
A malicious user may post a zstd bomb (a small compressed blob that decompresses to a huge buffer) as a blob on the Movement Celestia namespace. Even though blob data is checked to be signed, zstd decompression happens before the signature checks. This is done with the zstd::decode_all(blob.data) function, which places no limit on the decompressed size of the blob. Even with Celestia's 2 MB blob size limit, we were able to produce a PoC that uses approximately 100 GB of RAM on decompression.
Comments
Should we impose some constant size limit on IR blobs, or should it be a chain parameter?
Yes, but we also want to change the order of operations such that we are checking signatures on compressed blobs. That way we know whether honest signers are involved. Even with a size limit on IR blobs, we can still get bombed. But, later on, we can perhaps entertain slashing signers that bomb. Ideally, we would also want a way to catch bombing behavior and ignore the offending blob. For example, I think you can use …
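A minimal sketch of that reordering, assuming an Ed25519 scheme via `ed25519_dalek`; the struct shape and names are illustrative, not the actual DA types:

```rust
use ed25519_dalek::{Signature, Verifier, VerifyingKey};

// Illustrative shape; the real blob type lives in the DA crates.
struct SignedBlob {
    compressed_data: Vec<u8>,
    signature: Signature,
    signer: VerifyingKey,
}

fn accept_blob(blob: &SignedBlob) -> anyhow::Result<Vec<u8>> {
    // Verify over the *compressed* bytes first, so an unsigned bomb
    // is rejected before any decompression work is done.
    blob.signer.verify(&blob.compressed_data, &blob.signature)?;
    Ok(zstd::decode_all(blob.compressed_data.as_slice())?)
}
```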
@khokho, can you share your bomb so that we can write tests against it?
@l-monninger Here's the PoC:
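A rough sketch of the general technique, assuming the Rust `zstd` crate (the original PoC payload is not reproduced here): long runs of a single byte compress at ratios in the tens of thousands, so a blob under Celestia's 2 MB cap can expand to tens of gigabytes.

```rust
use std::io::{self, Read};

fn main() -> io::Result<()> {
    // A run of zeros compresses to a few bytes per 128 KiB zstd block,
    // giving ratios in the tens of thousands. 1 GiB keeps this demo
    // cheap; the actual PoC presumably crafts the frame more tightly
    // than a stock encoder to expand a <2 MB blob to ~100 GB.
    let uncompressed_len: u64 = 1 << 30; // 1 GiB of zeros
    let zeros = io::repeat(0u8).take(uncompressed_len);
    let bomb = zstd::encode_all(zeros, 19)?;
    println!(
        "{} bytes -> {} compressed bytes (~{}x)",
        uncompressed_len,
        bomb.len(),
        uncompressed_len / bomb.len() as u64
    );
    Ok(())
}
```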
To me it boils down to two approaches: …
@mzabaluev Yeah, that's what I would say.
We have a maximum block size parameter in the memseq configuration, but it is expressed as a number of transactions, not bytes.
We could assume something like Celestia's limit in bytes.
Yes, we should be able to apply it to blobs received via …
I assume this really means …
By plugging a zstd decompressing reader directly into the BCS decoder in streaming fashion, we can limit the allocation requirements to the 2 GiB limit enforced internally by bcs.
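A sketch of that streaming shape, assuming the Rust `zstd` crate; `bcs::from_reader` stands in for the optional reader-based entry point discussed in this thread and is not part of the stock bcs API:

```rust
// Illustrative stand-in for the real IR blob structure.
#[derive(serde::Deserialize)]
struct Blob {
    data: Vec<u8>,
}

fn decode_blob(compressed: &[u8]) -> anyhow::Result<Blob> {
    // Decompressed bytes stream straight into the BCS decoder; no
    // intermediate buffer beyond what bcs itself allows is allocated.
    let reader = zstd::stream::read::Decoder::new(compressed)?;
    // from_reader assumes the optional reader-based API discussed in
    // this thread; stock bcs only exposes from_bytes.
    let blob: Blob = bcs::from_reader(reader)?;
    Ok(blob)
}
```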
That 2 GiB bound has to be multiplied by the number of byte-array fields in the data structure, so it's 4 × 2 GiB = 8 GiB. Combined with other memory use, this is pushing on operating budgets. So while the streaming fix defends against the worst attacks, a proper solution would need the format change.
As we're realizing perhaps too late, three of these fields have no reason to be variable-length byte arrays, unless we want to designate the format of …
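A sketch of such a format change, with illustrative field names and sizes; BCS serializes fixed-size arrays without a length prefix, so these fields can no longer carry attacker-chosen lengths (`serde_big_array` works around serde's 32-element array derive limit):

```rust
use serde::{Deserialize, Serialize};
use serde_big_array::BigArray;

// Illustrative shape: only the payload stays variable-length.
#[derive(Serialize, Deserialize)]
struct IrBlob {
    data: Vec<u8>, // genuinely variable-length payload
    #[serde(with = "BigArray")] // serde only derives arrays up to 32 elements
    signature: [u8; 64], // fixed length, e.g. an Ed25519 signature
    signer: [u8; 32],    // fixed-length public key
    id: [u8; 32],        // fixed-length blob id / hash
}
```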
Currently, we still have the ability to change the DA on all of our networks, so I would just make this change. |
Use a crafted zstd payload as submitted in #876 (comment)
After discussion with @l-monninger, the data blob size limit of 2 GiB, while manageable on most machines that are expected to run a DA light node, still allows submitting blobs that would cause intolerable latency when decompressed and decoded. |
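A hard cap on the decompressed size can be enforced with the streaming decoder; a sketch assuming the Rust `zstd` crate, with `MAX_DECOMPRESSED` as an illustrative value rather than the limit chosen for the fix:

```rust
use std::io::{Error, ErrorKind, Read};

// Illustrative cap; the real limit would be a configured parameter.
const MAX_DECOMPRESSED: u64 = 16 * 1024 * 1024; // 16 MiB

fn decode_capped(compressed: &[u8]) -> std::io::Result<Vec<u8>> {
    let decoder = zstd::stream::read::Decoder::new(compressed)?;
    // Read at most one byte past the cap; if that extra byte shows up,
    // the blob is over the limit and is rejected without ever
    // allocating the full decompressed payload.
    let mut limited = decoder.take(MAX_DECOMPRESSED + 1);
    let mut out = Vec::new();
    limited.read_to_end(&mut out)?;
    if out.len() as u64 > MAX_DECOMPRESSED {
        return Err(Error::new(ErrorKind::InvalidData, "decompressed blob exceeds cap"));
    }
    Ok(out)
}
```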
Hello, I checked out the bug fix and it looks good. I also reviewed the BCS changes, and it’s great that they’re optional and shouldn’t affect existing code. |