What is/is not a "vulnerability"/"security issue"? #18
Comments
To kick it off:
Generally no. It's not a good practice, because it is harder to validate data in C++ than in JS. It might be possible to cause aborts and crashes if the data is not validated correctly.
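A minimal sketch of the pattern being described, assuming a hypothetical native addon (`binding` and its `writeBytes()` are invented for illustration): the public JS wrapper does the type and range checks, so malformed input produces a catchable error instead of reaching C++ unvalidated.

```js
// Hypothetical example: `binding` stands in for a compiled addon whose
// writeBytes() is assumed to expect (Buffer, integer offset).
const binding = require('./build/Release/example.node');

function writeBytes(buf, offset) {
  // Validate in JS, where checks are cheap and errors are catchable.
  if (!Buffer.isBuffer(buf)) {
    throw new TypeError('buf must be a Buffer');
  }
  if (!Number.isInteger(offset) || offset < 0 || offset > buf.length) {
    throw new RangeError('offset out of range');
  }
  // By the time C++ sees the arguments, they are known to be well-formed.
  return binding.writeBytes(buf, offset);
}
```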
I tried to put together a definition this weekend. The goal was to have something that fits the possible scope of the WG:
re:
From https://github.com/nodejs/security-wg/blob/master/meetings/2016-12-22.md
@deian Comments on the suggestions above? What kinds of things do you want to be clearer? Alternatively, what questions do you have?
I think @vdeturckheim's definition is getting closer. I'll add to/refine this more later as I find cycles, but for now here are some of my thoughts.

To me, type safety, memory safety, and crash safety are the first properties to strive for. JavaScript (out of the box) "provides" these, so modifying or extending the JavaScript environment should not introduce new unintended security/safety issues. (Browsers do this plus other security policies; we should aim for the former and then attempt the latter.) Unfortunately, this has to include the binding code, or code that is otherwise not intended to be "public". There are ways to truly hide unsafe APIs, but that's not done today. (For example, the Chrome extension system does this, so it's doable, but it's not something Node.js can easily be retrofitted with.)

In general, any exposed function (or object, when used as an argument to a function) that can be used to do something its intended specification did not allow (even if the spec is just English) should be considered a potential safety/security problem, IMO. Being able to use Buffer.from to read arbitrary parts of memory is one such example. But bugs in other seemingly security-unrelated libraries matter too. Unlike browsers, we don't have a policy we can point to and say "this is what should be enforced." So while clear bugs, like an HTTP parser implementation that allows arbitrary code execution (or the exhaustion of memory), are obvious vulnerabilities, I think it's important to err on the side of "this may have security implications": people are building apps whose security logic may depend on the correctness of the Node.js APIs and, as far as I know, there are no metrics on how people actually use those APIs.

I don't think we need to be overly conservative, but some things, like memory, type, and crash safety, should IMO be considered security bugs. Bugs in APIs we know to be security-related (e.g., crypto, tls) should likewise be flagged as such.
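To make the Buffer point concrete, here is a small sketch of that class of problem, using today's explicitly documented unsafe allocator (historically, `new Buffer(number)` in older Node.js releases behaved the same way):

```js
// Buffer.allocUnsafe() deliberately skips zero-filling for speed, so the
// returned buffer may contain stale process memory (old keys, cookies, ...).
const unsafe = Buffer.allocUnsafe(64);
console.log(unsafe); // contents are unpredictable leftover memory

// Buffer.alloc() zero-fills and is the safe default.
const safe = Buffer.alloc(64);
console.log(safe); // always 64 zero bytes
```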
This suggests the attack surface is the Node.js API, but that can't be the case. Languages that allow access to the runtime can be manipulated in surprising ways: monkey-patching of objects, for example. I can see the point that while such patching can cause Node/Python/Ruby to crash, and in Python/Ruby it's harder to write code that causes a segfault, I don't see how one is more or less exploitable than the other.
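A sketch of the kind of runtime manipulation meant here: any code loaded into the process can silently replace a built-in, and every later caller gets the patched behavior (`exfiltrate` is a hypothetical stand-in for attacker code).

```js
// Stand-in for code an attacker would use to siphon data off-process.
function exfiltrate(text) {
  // (illustration only) e.g. send the captured text over the network
}

const realParse = JSON.parse;
// Monkey-patch the built-in: every later JSON.parse call in the process
// now flows through this wrapper first.
JSON.parse = function patched(text, reviver) {
  exfiltrate(text);
  return realParse(text, reviver);
};

console.log(JSON.parse('{"ok":true}')); // behaves normally, input captured
```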
That said, I'm not personally opposed to making the binding layer fail harder and faster when misused; I assume adding ASSERTs is the kind of thing you have in mind. There has been talk of hiding the bindings, and also of making them not get exposed without a flag; that might be worth raising again. I can't recall off the top of my head why they are being directly used by code outside of Node's internal JS libraries.
Not for private code, but we do have tools to crawl npmjs.org's packages and look for instances of API use.
Sorry, what I was trying to get at is that you can't exclude the private APIs from your attack surface if they are not actually hidden.
There are a bunch of language-level security techniques that we can use to isolate/sandbox/confine somewhat untrusted code (caja, djs, cowl, etc.). Most of these are useless if you can read/write arbitrary parts of memory.
I don't completely follow the second part, mind clarifying?
A hard, deliberate crash is better than a crash due to a null pointer dereference or OOB access, but I don't think it's always the right thing to do. Definitely do it when you need to bail out, but you should be throwing a JS exception if you can, particularly because a hard crash can take down a server more easily than a language-level exception can. Now, if the hiding of bindings is done as you suggest, and the JS code that calls the binding layer does proper sanitization, then ASSERTs sound great!
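A minimal sketch of the server-availability point, assuming a hypothetical `riskyParse` that throws an ordinary JS error on bad input: a thrown exception can be handled per request, while an abort() in native code takes the whole process down.

```js
const http = require('http');

// Hypothetical validator: throws a catchable TypeError on bad input.
function riskyParse(url) {
  if (url.includes('..')) throw new TypeError('bad path');
  return url;
}

http.createServer((req, res) => {
  try {
    riskyParse(req.url);
    res.end('ok');
  } catch (e) {
    res.statusCode = 400; // one bad request fails; the server survives
    res.end('bad input');
  }
  // If the same bad input instead hit an abort()/segfault in native code,
  // the entire process (and every in-flight request) would die.
}).listen(8080);
```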
I think this is a great idea!
Why should they be in our attack surface if they can only be used by legitimate code authors? Doesn't that imply our attackers are the authors of the JS code Node is intentionally running? Those authors can do anything. I don't understand your threat model; can you clarify?
Buffer.from is a documented API, with dangerous side-effects. process.binding() is explicitly "use at your own risk". You use it, you risk it.
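For context, a sketch of the distinction (as it stood in older Node.js versions where process.binding() was still freely exposed; it is deprecated in newer releases):

```js
// Documented API with known-dangerous behavior: at least its contract
// is written down in the official docs.
const b = Buffer.from('deadbeef', 'hex');

// Undocumented internal binding: skips the JS-layer argument validation
// that the public modules perform, so misuse can abort the process
// rather than throw. "You use it, you risk it."
const fsBinding = process.binding('fs'); // older Node.js only
```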
I can PR example text into the security-wg, and once we agree, either the text or a link to the text can be put in the Security section of the main nodejs/node README.
@sam-github lgtm. Thanks a lot!
Security researchers have requested that we make an attempt at documenting this.