
What is/is not a "vulnerability"/"security issue"? #18

Closed
sam-github opened this issue Jan 13, 2017 · 10 comments

@sam-github
Contributor

Security researchers have requested that we make an attempt at documenting this.

@sam-github
Contributor Author

To kick it off:

  • direct use of C++ bindings - not an issue?
  • use of documented APIs that can access the host system (like fs) is not a security issue. This may need to be called out, because filesystem access without the user's permission in some JavaScript environments, like browsers, would be a vulnerability

@mcollina
Member

  • direct use of C++ bindings - not an issue?

Generally no. It's not good practice, because it is harder to validate data in C++ than in JS. It might be possible to cause aborts and crashes if the data is not validated correctly.

  • use of documented APIs that can access the host system (like fs) is not a security issue. This may need to be called out, because filesystem access without the user's permission in some JavaScript environments, like browsers, would be a vulnerability

The use of fs without validating that the external user is allowed to access the requested data is an issue. The use of relative paths can allow an attacker to read or write unwanted files.

@vdeturckheim
Member

I tried to put together a definition this weekend. The goal was to have something that fits the possible scope of the WG:

Any technical bypass that is not directly a mistake in the implementation of business logic and that could allow an unauthorized entity to gain access to or control of a system.

@sam-github
Contributor Author

re:

Security researchers have requested that we make an attempt at documenting this.

From https://github.com/nodejs/security-wg/blob/master/meetings/2016-12-22.md

  • Deian: Fraser and I have disclosed some vulns to core, and want a description
    of what is a sec vuln, and not just a bug. Describing this may be difficult,
    but without a threat model, it's hard to communicate.

@deian Comments on the suggestions above? What kinds of thing do you want to be clearer? Alternatively, what questions did you have?

@deian
Member

deian commented Feb 21, 2017

I think @vdeturckheim's definition is getting closer. I'll add to/refine this more later as I find cycles, but for now here are some of my thoughts:

To me, it seems like type, memory, and crash safety are the first properties to strive for. JavaScript (out of the box) "provides" these. As such, modifying or extending the JavaScript environment should not introduce new unintended security/safety issues. (Browsers do this plus other security policies. We should aim for the former and then try the latter.)

Unfortunately, this should include the binding code or code that is otherwise not intended to be "public". There are ways to truly hide unsafe APIs, but that's not done today. (For example, the Chrome extension system does this, so it's doable, but that's not something Node.js can be easily retrofitted to.)

In general, any exposed function (or object, when used as an argument to a function) that can be used to do something its intended specification (even if the spec is just English) does not allow should be considered a potential safety/security problem, IMO.

Being able to use Buffer.from to read arbitrary parts of memory is one such example. But, bugs in other seemingly security-unrelated libraries matter too. Unlike browsers, we don't have a policy we can go to and say this is what should be enforced. So, while clear bugs like an HTTP parser implementation that allows arbitrary code execution (or for the exhaustion of memory) are obvious vulnerabilities, I think it's important to err on the side of “this may have security implications” -- people are building apps whose security logic may depend on the correctness of the Node.js APIs and, as far as I know, there are no metrics on how people actually use the APIs.

I don’t think we need to be overly conservative, but there are some things, like memory, type, and crash safety, that IMO should be considered security bugs. Likewise, bugs in APIs we know to be security-related (e.g., crypto, tls) should be flagged as such.
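
The Buffer example mentioned above can be made concrete. In older Node releases, `new Buffer(n)` returned uninitialized memory, so an attacker-controlled number could leak process contents; the modern API splits the behaviors explicitly. A small sketch of the current, safer surface:

```javascript
// Buffer.alloc() zero-fills: the safe default.
const zeroed = Buffer.alloc(8);
console.log(zeroed.every((b) => b === 0)); // prints true

// Buffer.allocUnsafe() may contain old memory contents; the caller is
// expected to overwrite every byte before exposing it.
const raw = Buffer.allocUnsafe(8);
raw.fill(0);

// Buffer.from() no longer accepts a bare number, closing the casual
// misuse path discussed in this thread.
let threw = false;
try {
  Buffer.from(8);
} catch (e) {
  threw = true;
}
console.log(threw); // prints true
```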

@sam-github
Contributor Author

Unfortunately, this should include the binding code or code that is otherwise not intended to be "public".

This suggests the attack surface is the Node.js API, but that can't be the case: more damage can be done with execSync("rm -rf /") than with process.binding("dgram").blah.

Languages that allow access to the runtime can be manipulated in surprising ways: monkey-patching of objects, for example. I can see the point that while such patching can cause node/python/ruby to crash, in python/ruby it's harder to write code that causes a segfault, but I don't see how one is more or less exploitable.
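
The monkey-patching point is easy to demonstrate: any code loaded into the process can replace a built-in, and every later caller silently goes through the patched version. The patch below is a hypothetical, benign illustration.

```javascript
// Keep a reference to the original and install a wrapper.
const original = JSON.parse;
let intercepted = 0;

JSON.parse = function patched(text, reviver) {
  intercepted += 1; // a hostile patch could inspect or alter every payload
  return original.call(JSON, text, reviver);
};

const value = JSON.parse('{"a":1}'); // transparently goes through the patch
JSON.parse = original;               // restore the runtime

console.log(value.a, intercepted);   // prints "1 1"
```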

Buffer.from() is an interesting example, because it's an API that allows casual misuse. It's not clear to me that accessing the binding layer is in the same class; it's such a deliberate act that whoever writes the code needs to take responsibility for the security problems they create, not Node (IMO).

That said, I'm not personally opposed to making the binding layer fail harder and faster when misused. I assume adding ASSERT() checks for binding-layer misuse would be a satisfactory way of ensuring it cannot be used outside its intended specification?

There has been talk of hiding the bindings, and also of not exposing them without a flag; that might be worth raising again. I can't recall off the top of my head why they are being directly used by code outside of Node's internal JS libraries.

there are no metrics on how people actually use the APIs.

Not for private code, but we do have tools to crawl npmjs.org's packages and look for instances of API use.

@deian
Member

deian commented Feb 21, 2017

This suggests the attack surface is the Node.js API, but that can't be the case: more damage can be done with execSync("rm -rf /") than with process.binding("dgram").blah.

Sorry, what I was trying to get at is that you can't not include the private APIs in your attack surface if they are not actually hidden.

Languages that allow access to the runtime can be manipulated in surprising ways: monkey-patching of objects, for example. I can see the point that while such patching can cause node/python/ruby to crash, in python/ruby it's harder to write code that causes a segfault, but I don't see how one is more or less exploitable.

There are a bunch of language-level security techniques that we can use to isolate/sandbox/confine somewhat untrusted code (caja, djs, cowl, etc.). Most of these are useless if you can read/write arbitrary parts of memory.

Buffer.from() is an interesting example, because it's an API that allows casual misuse. It's not clear to me that accessing the binding layer is in the same class; it's such a deliberate act that whoever writes the code needs to take responsibility for the security problems they create, not Node (IMO).

I don't completely follow the second part, mind clarifying?

That said, I'm not personally opposed to making the binding layer fail harder and faster when misused. I assume adding ASSERT() checks for binding-layer misuse would be a satisfactory way of ensuring it cannot be used outside its intended specification?

A hard, deliberate crash is better than a crash due to a null-pointer dereference or OOB access, but I don't think it is always the right thing to do. Definitely do it when you need to bail out, but you should throw a JS exception when you can, particularly because a hard crash can take down a server more easily than a language-level exception can. Now, if the hiding of bindings is done as you suggest and the JS code that calls the binding layer does proper sanitization, then ASSERTs sound great!
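
The layering described above can be sketched as a public JS wrapper that validates arguments and throws catchable exceptions, so the native layer only ever sees sane input and can ASSERT internally. `fakeBinding` and `writeBytes` below are hypothetical stand-ins, not real Node.js bindings.

```javascript
// Stand-in for a native binding: trusts its caller and fails hard.
const fakeBinding = {
  writeBytes(buf, offset, length) {
    // real native code would ASSERT() this invariant and abort on violation
    if (offset + length > buf.length) throw new Error('ASSERT failed');
    return length;
  },
};

// Public JS wrapper: sanitizes input, throwing catchable JS exceptions
// instead of letting bad arguments reach the native layer.
function writeBytes(buf, offset, length) {
  if (!Buffer.isBuffer(buf)) throw new TypeError('buf must be a Buffer');
  if (!Number.isInteger(offset) || offset < 0)
    throw new RangeError('offset must be a non-negative integer');
  if (!Number.isInteger(length) || length < 0 || offset + length > buf.length)
    throw new RangeError('length out of bounds');
  return fakeBinding.writeBytes(buf, offset, length);
}

console.log(writeBytes(Buffer.alloc(4), 0, 4)); // prints 4
```

With this split, a misbehaving caller gets a `TypeError` or `RangeError` it can handle, while the ASSERT in the binding only fires if the wrapper itself has a bug.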

There has been talk of hiding the bindings, and also of not exposing them without a flag; that might be worth raising again. I can't recall off the top of my head why they are being directly used by code outside of Node's internal JS libraries.

I think this is a great idea!

@sam-github
Contributor Author

sam-github commented Mar 2, 2017

Sorry, what I was trying to get at is that you can't not include the private APIs in your attack surface if they are not actually hidden.

Why should they be in our attack surface if they can only be used by legitimate code authors? Doesn't that imply our attackers are the authors of the JS code Node is intentionally running? Those authors can do anything. I don't understand your threat model; can you clarify?

Buffer.from() is an interesting example, because it's an API that allows casual misuse. It's not clear to me that accessing the binding layer is in the same class; it's such a deliberate act that whoever writes the code needs to take responsibility for the security problems they create, not Node (IMO).


I don't completely follow the second part, mind clarifying?

Buffer.from is a documented API with dangerous side effects.

process.binding() is explicitly "use at your own risk". You use it, you risk it.

@deian

@sam-github
Contributor Author

I can PR example text into the security-wg, and once we agree, either the text or a link to it can be put in the Security section of the main nodejs/node README.

@vdeturckheim
Member

@sam-github lgtm. Thanks a lot!
