Erasing secrets from memory (zero on drop) #11
After this answer to my question on the non-zeroing dynamic drop RFC, I've decided to start using a macro like:

which looks okay, except that it maybe needs some attribute due to being unstable. I suppose a stable version could be built with

And you could just write

In particular, this code demonstrates that fixed-length arrays are not zeroed when dropped.
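One way to sketch the stable version of that idea is a wrapper type whose `Drop` impl overwrites the array with volatile writes. This is my own illustration (the `SecretKey` name and layout are assumptions, not the macro from the comment above):

```rust
use std::ptr;

/// Hypothetical wrapper that zeroes a fixed-length key when dropped.
struct SecretKey([u8; 32]);

impl Drop for SecretKey {
    fn drop(&mut self) {
        for byte in self.0.iter_mut() {
            // Volatile stores are much harder for the optimizer to elide
            // than plain writes to memory that is about to be freed.
            unsafe { ptr::write_volatile(byte, 0) };
        }
    }
}

fn main() {
    let key = SecretKey([0xAB; 32]);
    assert_eq!(key.0[0], 0xAB);
    drop(key); // the buffer is overwritten before the memory is released
}
```

Note this only helps for values that are actually dropped; as discussed below, `mem::forget` and `Rc` cycles can still skip `drop()` entirely.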
If secretgrind becomes mature enough, then I'd imagine RFCs and code for appropriate attributes in rustc should go smoothly. I'd think the main issue would be ensuring that its call-graph-based analysis did not cause any problems for dynamic linking, not that Rust does that very often.
I assume memory leaks are only possible with trait objects? I'll go read that blog post.
I see now. You must explicitly opt in to leaking resources with
Thanks so much for this detailed write-up!
Also, `std::ptr::write_volatile` says: 'Rust does not currently have a rigorously and formally defined memory model, so the precise semantics of what "volatile" means here is subject to change over time. That being said, the semantics will almost always end up pretty similar to C11's definition of volatile.'

There are another few ways to fail to zero memory. If we're just trying to zero an array, then we're seemingly just worried about the data structures we insert the key material into. It appears any local stack usages should only leak on panic, as well as in the scenarios explained by Laurent Simon.
Memory leaks are possible safely with The whole
There were other issues which I don't quite recall. But mostly, it doesn't seem to be something that will happen if we propose it again (feel free to try though). I too was on the
Also, I'm not sure if this line of reasoning is the correct one to follow. We already rely on the Rust type system to make it impossible to leak secrets. Reading uninitialized data is already UB. Relying on the same type system to furthermore guarantee

Basically, for simple uses of a secret value, where it's visually verifiable that it's not being shared in such a way as to cause potential leaks, the leak-checking static analysis doesn't help much. For complicated cases where the static analysis would help, any bugs in unsafe code are likely to break enough to render the leak protection moot anyway. If your threat model doesn't involve broken unsafe code, you don't have to worry anyway, since Rust guarantees no uninit reads if your unsafe code is sound.

I understand having protection against this kind of thing built into a different level of the application, e.g. via a sanitizer on top of the existing safety static analysis, or by having extra instrumentation to clear stacks. I don't really see the value of more static analysis here, since in the cases where it matters it won't be robust to other unsafe code breaking down anyway.

(I hope I got this point across well -- it's a bit nuanced, and usually I'm all for static checks; just that in this case I don't see it adding anything)
There are going to be attacks on Rust code through both calls to C libraries and mistakes in

Anyone building a modern messaging application needs to hold Axolotl ratchet key material in non-trivial data structures and serialize it to disk. Pond uses go-protobuf for serialization, for example. Cap'n Proto is far nicer than protobuf, but uses a strange arena allocator that can leave holes.

Right now, I'm wondering if

Imagine

You could use this to find memory leaks more easily, of course, maybe adding the
Yeah, so I am considering broken unsafe code and broken C libraries in the threat model; I'm saying that the leak protection that static analysis can provide will depend on the same invariants plus some extras, and in the face of broken unsafe code, will likely be broken for nontrivial cases.
Against prepared attackers, sure, but I think just leaving key material around willy-nilly could interact poorly with complex data structures to produce accidental disclosures. A few volatile writes cost little compared with the crypto itself.

An interesting place to raise the static analysis ideas might be the Roadmap RFC, because they were discussing interoperability with C++, and this ability to recheck all your dependencies in a slight variant of Rust might be relevant to stuff like on-demand monomorphization or whatever.
You're not getting my point here 😄 (my fault, it's nuanced). Static leak protection won't protect against that. It will protect against the case where you trigger a memory leak in safe Rust. Triggering a memory leak in safe Rust is hard to do by accident. It being safe to do does not mean it's easy. You need to explicitly call

Or, you need to have reference-counted cycles in your application. If your application is such that you are in danger of Rc cycles involving the key happening, you already have your key material being shared willy-nilly, and no amount of static analysis will protect you from that.

Basically, the nontrivial/interesting cases that such analysis protects you from are situations where you have the problem regardless of how leaky it is. I bet a nicer analysis not involving

(I don't really agree that this is relevant to the C++ bits of the roadmap)
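For concreteness, here is a minimal sketch of the reference-counted-cycle case described above: two `Rc` nodes that point at each other in entirely safe Rust, whose `Drop` impl therefore never runs. The `Node` type is my own illustration:

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    secret: [u8; 16],
    next: RefCell<Option<Rc<Node>>>,
}

impl Drop for Node {
    fn drop(&mut self) {
        // Naive zeroing; never reached for the cyclic pair below.
        self.secret = [0; 16];
    }
}

fn main() {
    let a = Rc::new(Node { secret: [1; 16], next: RefCell::new(None) });
    let b = Rc::new(Node { secret: [2; 16], next: RefCell::new(Some(a.clone())) });
    *a.next.borrow_mut() = Some(b.clone());
    // a and b now keep each other alive: when both handles go out of
    // scope, each strong count only falls to 1, drop() is skipped, and
    // the secrets stay in memory until the process exits.
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
}
```

This is exactly the kind of leak that is "safe" but hard to trigger by accident unless the key material is already being shared widely.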
Ahh! I'd misread one statement you made as being an argument against zeroing
Overwriting ("zeroizing") secrets in memory bounds the temporal interval in which they exist. You have to think outside Rust's sandbox: the threat is not only leaking secrets from within the process, but also someone from the outside attaching a debugger, the whole process memory being dumped (a core dump, a VM snapshot, suspend-to-disk), or even direct access to the memory hardware (a cold boot attack). If all traces of the secret have been safely erased by then, you're safe.

I proposed something related in rust-lang/rfcs#1496. As for preventing the write from being optimized away, there's a trick similar to what I did on my
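A sketch of one such optimization-resistance trick, in the spirit of what is being described (this is my own approximation, not the exact code referenced above): volatile writes followed by a compiler fence, so the stores can be neither elided nor reordered past subsequent code.

```rust
use std::ptr;
use std::sync::atomic::{compiler_fence, Ordering};

/// Zero a buffer in a way the optimizer should not remove, even if the
/// buffer is about to go out of scope.
fn clear(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        // Each store is volatile, so it is treated as observable.
        unsafe { ptr::write_volatile(b, 0) };
    }
    // Keep the compiler from reordering later code before the stores.
    compiler_fence(Ordering::SeqCst);
}

fn main() {
    let mut key = [0x42u8; 8];
    clear(&mut key);
    assert!(key.iter().all(|&b| b == 0));
}
```

Rust has no formally defined semantics for volatile yet (see the `write_volatile` docs quoted earlier in this thread), so this is a best-effort mitigation rather than a guarantee.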
I am. I'm not arguing that zeroing secrets is a bad idea. I think it's a really good thing to do. I'm saying that I don't see any additional value in static analysis in Rust that ensures that values don't leak, in the context of an already-broken sandbox.
Note that you can try to use
As I understand it,
I think this says: if two cryptographic libraries attempt to

Or… You can write smallish code, dynamically link it somehow, and just call

Anyways, I think
@burdges I think you want to be the exclusive owner of your secret data anyway and not expose it to other applications/libraries.
Interesting. I suppose any data structures that store vast numbers of keys, like per-contact ratchet states, should be encrypted in memory then. And threads doing crypto might want a small
Just created this related issue: rust-lang/rfcs#1850
Ideally, one should probably update tars to the new allocator traits or something. I've posted a quick-and-dirty crate to do zero-on-drop the cheap way discussed here, however: https://github.com/burdges/zerodrop-rs
We'll see what people say about this idea: rust-lang/rfcs#1853
I would like to present a crate I wrote these last few days inspired by this discussion: https://crates.io/crates/clear_on_drop. Some of the ideas in it might be useful.
Rust does not zero non-`Drop` types when it drops them. Avoid leaking these types, as doing so obstructs zeroing them. In particular, if you are working with secret key material:

- do not call `::std::mem::forget`,
- do not unsafely zero types with owning pointers,
- ensure your code cannot panic,
- take care with `Weak`, and
- examine the data structures you use for violations of these rules.

See rust-lang/rfcs#320 (comment) and https://github.com/isislovecruft/curve25519-dalek/issues/11
Just to round this out -- we're currently using @cesarb's

For stack allocations, I don't think there's much we can do, for two reasons:
Yesterday @burdges mentioned some ideas about how to try to erase secret data from memory, and some things about zero-on-drop behaviour in Rust. It would be good to have a reliable way to clear secret data.
If I understand correctly, just implementing the `Drop` trait to zero memory may not be sufficient, for two reasons:

1. the write may be optimized away (see also);
2. `drop()` may never be called, because Rust's memory model allows memory leaks: "memory unsafety is doing something with invalid data, a memory leak is not doing something with valid data".

Other notes that may be of interest: this morning at RWC 2017, Laurent Simon (@lmrs2?) presented secretgrind.
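The second reason can be demonstrated in a few lines of entirely safe Rust: `mem::forget` takes ownership of a value and never runs its destructor, so any zeroing done in `drop()` is skipped. This example (my own, using an atomic flag just to make the skipped destructor observable) is a sketch, not code from this thread:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Set to true if the destructor ever runs.
static DROPPED: AtomicBool = AtomicBool::new(false);

struct Sensitive([u8; 4]);

impl Drop for Sensitive {
    fn drop(&mut self) {
        self.0 = [0; 4]; // naive zeroing
        DROPPED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let s = Sensitive([9; 4]);
    std::mem::forget(s); // safe Rust, yet drop() never runs
    assert!(!DROPPED.load(Ordering::SeqCst));
}
```

So zero-on-drop alone bounds the secret's lifetime only along the paths where the value is actually dropped.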
It could be quite convenient if Rust had a `#[secret_stack]` function annotation that guaranteed stack erasure, but this would require a language change.