
Permission Model #791

Closed
RafaelGSS opened this issue Apr 20, 2022 · 60 comments

Comments

@RafaelGSS
Member

Permission Model initial issue

Hello everybody!

Following up on the Security Model initiative and the Mini summit (Next-10) in April, there seems to be consensus that Node.js aims to have a security permission system, thus preventing third-party libraries from accessing machine resources without the user's consent.

This system was previously researched by James Snell and Anna Henningsen [1], which resulted in excellent material as a starting point.

For context, the material is available through those links:

Constraints

This security model is not bulletproof, which means there are constraints we agree on before implementing this system:

  • It’s not a sandbox; we assume the user trusts the running code
    • Example: the user sets the flag --allow-fs-read without specifying a scope. Any external library can make use of it, leading to a potential exploit; in this case, with the user's consent.
  • Ideally, no breaking changes.
  • It must add little or no overhead when disabled and low overhead when enabled.

Points to be discussed

This is a big topic that could lead to several discussion threads. To avoid unnecessary discussion at this moment, let's use the following three topics as boundaries for this first iteration.

  1. Should it be process or module scoped?

Is the effort to make it work with modules worth it? Example:

{
  "name": "package-name",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "permissions": [
    "fs"
  ]
}

Then the user would provide permission for each module (as you need to do for your smartphone apps).

  2. Should the user be able to change the permissions at runtime?

If yes, how does it behave in an Asynchronous Context? Example:

const { setTimeout } = require('timers/promises');

console.log(process.policy.check('net')); // true

process.nextTick(() => {
  process.policy.deny('net');
  setTimeout(1000).then(() => {
    console.log(process.policy.check('net')); // false
  });
});

process.nextTick(() => {
  setTimeout(1000).then(() => {
    console.log(process.policy.check('net')); // ???
  });
});
  3. What would be the desired granularity level?
  • --policy-deny=fs or --policy-deny=fs.in?
  • --policy-deny=net or --policy-deny=net.tcp.in?
  • Should the user be able to restrict specific TCP ports, for instance, denying all inbound packets on port 3000?
  • Is Deno's granularity enough? --allow-read --allow-net

cc/ @jasnell @Qard @mcollina @mhdawson

@vdeturckheim
Member

Thanks for opening the topic. For questions 1 and 2, I have two concerns about the direction of the answers:

  1. Should it be process or module scoped?

tbh, I am more scared by the performance hit or feasibility of a module-scoped approach. Do we have guidelines on how to reach that?

  2. Should the user be able to change the permissions at runtime?

If so, we need to find a way to make sure potential malicious code does not rewrite policies to its own interests at runtime.

@RafaelGSS
Member Author

tbh, I am more scared by the performance hit or feasibility of a module-scoped approach. Do we have guidelines on how to reach that?

Yes, being module-scoped would certainly add a lot of complexity. I'm not sure about the performance impact, though. I think it's feasible to look at a possible implementation and see if it's at least reasonable to do in core.

If so, we need to find a way to make sure potential malicious code does not rewrite policies to its own interests at runtime.

Yes, good point.


Maybe we can start with an MVP and then iterate over it as the need arises. What do you think? I mean, I can start an MVP with:

  • Process scoped
  • No API to change permissions in runtime
  • Broad granularity level (same as Deno does)

And then, as this discussion evolves, I can assess the viability of further updates.

@UlisesGascon
Member

Thanks for the thread @RafaelGSS! 🙌

Should it be process or module scoped?

I suggest the process model, so we keep a simpler scope for the MVP.

There are some complex scenarios if we want to support modules, as it seems hard to deal with a big dependency tree. For example:

My project depends on package-a and package-b. I want package-a to be able to modify files but not package-b, so my settings for modules are:

[
  {
    "name": "package-a",
    "permissions": [ "fs" ]
  },
  {
    "name": "package-b",
    "permissions": null
  }
]

But then package-a and package-b both depend on package-c; how does the module policy work in this scenario? Using fs from package-a should be fine, but not from package-b.

Should the user be able to change the permissions at runtime?

I agree with @vdeturckheim that malicious packages can make modifications. On the other hand, we might need that flexibility in some cases, so... maybe we can assume that, by default, it is not possible to modify the policies without restarting the application, unless you specifically allow that behaviour (and assume the risk) by using a specific flag like --allow-policy-changes.

What would be the desired granularity level?

+1 to the Deno model (at least for the first iteration), and then we can see whether we really get solid feedback justifying more complex features.

@jasnell
Member

jasnell commented Apr 20, 2022

For 1, I think process-scoped is the only viable answer. Module-scoped is going to be far too complex and disruptive to do, and there's really no clear must-have use case that would justify it.

For 2, I would argue that a model where permissions can only be restricted at runtime would be appropriate. That is, the process starts with a given set of permissions; within the code, there are options for restricting permissions further (either something like process.permissions.deny('net') that immediately turns it off for the entire process, or something like process.permissions.deny('net', () => { ... }) where the callback passed in runs with the more restrictive set... with my preference being the former).
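Roughly, the two shapes would look like this (a sketch only; process.permissions is a hypothetical API, not something that exists in Node.js today):

// Shape 1: immediately and permanently drop the permission for the whole process.
process.permissions.deny('net');

// Shape 2: run only the supplied callback under the more restrictive permission set,
// leaving the rest of the process unaffected.
process.permissions.deny('net', () => {
  // code in here cannot use the network
});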

For 3, keeping the model similar to Deno's is great but it's going to be very difficult to change later if we don't make it granular enough. Let's be sure to give this particular bit a good deal of thought.

@bengl
Member

bengl commented Apr 20, 2022

  1. Should it be process or module scoped?

Multiple threads could run with different policies (with caveats), but that's not terribly different from process-level policies. So yes, process-scoped. Module-level policies aren't possible to enforce without non-trivial barriers between modules, and even at that point you run into logical nightmares, as pointed out by @UlisesGascon.

  2. Should the user be able to change the permissions at runtime?

Much like with other well-known and well-used permissions systems, code ought to be able to decide it can drop privileges, but never be able to grant itself any expanded privileges.

If yes, how does it behave in an Asynchronous Context?

It should be applied synchronously, much like other things that modify process-level behaviour do today. For example, changing environment variables or using process.setuid() are synchronous operations that take effect immediately. Attempting to have policies follow through with async context (like AsyncLocalStorage does) means invoking async_hooks and the performance hit from that, and ensuring the correctness of async_hooks, even in weird security scenarios, which is non-trivial at the very least.

If I'm reading the example correctly, both calls to check() happen at least 1000ms after deny() has been called, so under my suggestion, the second check would also return false.

  3. What would be the desired granularity level?

Filters on any given API should eventually give the ability to filter on individual operations and individual resources. For prior art on this, see https://web.archive.org/web/20190821102906/https://intrinsic.com/docs/latest/index.html, which describes a (now-defunct, but previously working) policy system for Node.js.

Blocking entire subsystems is fine for an MVP, so long as the configuration format doesn't preclude further detail in filtering.

@mcollina
Member

I think we must provide some level of restriction on the file system and on which host/port a process can connect or listen to.

As an example, I might want to limit the folders in which a process writes files to the tree of folders under the nearest package.json. This would severely limit the attack surface of a malicious dependency.

@vdeturckheim
Member

What were the points that actually blocked the previous iterations? Given the current discussion, I am under the impression that at least one prior PR did most of what we are heading toward.

@jasnell
Member

jasnell commented Apr 21, 2022

The thing that stopped the prior effort was lack of engagement... we couldn't get anyone else to engage in the conversation enough to move it forward. Hopefully the timing is better this time.

@RaisinTen

What would happen if we start node with --policy-deny=net and the main process spawns a child process? Will the child process be able to access the net?

@jasnell
Member

jasnell commented Apr 21, 2022

@RaisinTen ... see the discussion on that point here:

A core part of Node.js is the ability to spawn child processes and load native addons. It does no good for Node.js to restrict a process’s ability to access the filesystem if the script can just turn around and spawn a separate process that does not have that same restriction! Likewise, given that native addons can directly link to system libraries and execute system calls, they can be used to completely bypass any permissions that have been denied.

The assumption, then, is that explicitly denying any permission should also implicitly deny other permissions that could be used to bypass those restrictions. In the above example, invoking the node binary with --policy-deny=net would also restrict access to loading native addons and spawning child processes. The --policy-grant would be used to explicitly re-enable those implicitly denied permissions if necessary.

In other words, a process that has --policy-deny=net will not be able to spawn a child process by default. If the user explicitly allows it to spawn a child process, then it will be the user's responsibility to pass along the correct arguments.
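For illustration, a minimal sketch of that behaviour under the proposed flags (both the flags and the resulting error are hypothetical at this point):

// Hypothetical run: node --policy-deny=net app.js
const { spawn } = require('child_process');

// Under the proposal, denying `net` implicitly denies spawning child processes,
// so this call would be expected to fail with a permission error unless
// --policy-grant is used to explicitly re-enable child processes.
spawn('node', ['server.js']);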

@Qard
Member

Qard commented Apr 22, 2022

Personally I would be really interested in some sort of module-scoped solution, though I feel like that would likely be very complicated to manage given you'd have to handle permission delegation across the dependency graph. You'd probably need not only permissions to use certain things in a module but then also permission to delegate those permissions to its dependencies, and the config would likely get super complicated.

I feel like process-scoped permissions are too simplistic though. A user might turn on net access because one module needs it but then some other unrelated module does something nefarious and doesn't get blocked because net was allowed. 🤔

Perhaps an import/require-based system where a module needs permission to load another module? So it would just be unable to attempt to load a module it's not supposed to be able to use, throwing a permission error on import/require. It wouldn't be too terribly difficult to specify something like --allow=http:fastify,ws to specify that the http module should be allowed to be used directly by the fastify or ws modules, but no others.

@RafaelGSS
Member Author

I feel like process-scoped permissions are too simplistic though. A user might turn on net access because one module needs it but then some other unrelated module does something nefarious and doesn't get blocked because net was allowed. 🤔

Perhaps an import/require-based system where a module needs permission to load another module? So it would just be unable to attempt to load a module it's not supposed to be able to use, throwing a permission error on import/require. It wouldn't be too terribly difficult to specify something like --allow=http:fastify,ws to specify that the http module should be allowed to be used directly by the fastify or ws modules, but no others.

As described by others, I'm not sure the complexity of handling module-scoped permissions is worth it. It seems the user can prevent the above behavior by specifying the allowed net port. For instance, with --allow-http=3000, even if another unrelated module does something nefarious, it wouldn't succeed, because the port is already in use and only port 3000 is allowed.
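A sketch of that idea (the --allow-http flag is hypothetical and only illustrates port-level granularity):

// Hypothetical run: node --allow-http=3000 app.js
const http = require('http');

const handler = (req, res) => res.end('ok');

// Allowed: port 3000 was explicitly granted on the command line.
http.createServer(handler).listen(3000);

// Would be rejected under this idea: only port 3000 is allowed.
http.createServer(handler).listen(8080);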

@RafaelGSS
Member Author

RafaelGSS commented Apr 22, 2022

There seems to be consensus on an MVP with:

  • Process scoped
  • Only drop privileges in runtime
  • Flexible granularity (--policy-deny-net || --policy-deny-net=3000) - (--policy-deny-fs || --policy-deny-fs=/usr/sbin)

Are we all in agreement on that? If so, I'll create a fresh PR (probably using most of the work @jasnell did in nodejs/node#33504) and then we can iterate over that.

EDIT: Regarding the nomenclature (policy-deny= or policy-deny-net) we can discuss in the PR.

@bengl
Member

bengl commented Apr 22, 2022

@Qard

Any permission system with support for multiple sets of permissions requires sufficient separation between the permission sets and the units of isolation they apply to. This means the units of isolation cannot communicate with each other except through strictly controlled interfaces (meaning no shared objects, not even globals). Without this, privileges can easily leak between the units of isolation. We simply do not have this between modules, and any sensible approach to having this between modules requires massive changes in the way we both import and call code from other modules.

To illustrate this, consider a module a.mjs and module b.mjs. Assume that through whatever permission system we create, we give a access to net, but deny that to b. A poorly written or malicious a could just assign all the relevant net methods to the global, and then b can use it.
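A minimal sketch of that leak, using the file names from the example above:

// a.mjs: granted `net` under the hypothetical module-scoped system
import net from 'node:net';
globalThis.leakedNet = net; // nothing stops a module from publishing the capability globally

// b.mjs: denied `net`, yet it can reach the same functionality through the shared global
globalThis.leakedNet.connect(80, 'example.com');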

A common suggestion is to track module usage via the stack, but this is not performant at all. Even if it were, there are other ways to share functionality and have the calling module appear to be the one with adequate permissions. You'd have to block transitive access, and then the number of related problems here starts expanding.

@Qard
Member

Qard commented Apr 22, 2022

Sure, there are likely always going to be ways to get around it. But process-scoped just means that, rather than needing to do some environment hacking if you aren't the specific module the user intended to have access, you simply have access automatically by inheriting it from the process-wide config needed for something else. So you really aren't protecting much of anything if users are just always turning on net access. It seems to me that to adequately control I/O you need either module-level blocking of use in some way, or super granular process-wide control of interfaces, like saying HTTP requests can only be made to this specific list of URLs, or file system access only allows reading these specific files and writing these other specific files.

I don't think we can escape significant complexity without making any sort of policy system essentially useless.

@jasnell
Member

jasnell commented Apr 22, 2022

To deal with more granular levels like per-module, what we would need is proper sandboxing of isolates and global scopes. We could set it up so that worker_threads can be launched with either the same or more restrictive permissions than the current thread; and if we ever did introduce a properly sandboxed isolate mechanism, then we could launch those with their own permissions as well.
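A sketch of what that could look like for worker_threads (the permissions option is purely hypothetical; no such option exists today):

const { Worker } = require('worker_threads');

// Hypothetical option: the worker starts with the same or a more restrictive
// permission set than the current thread, and could never broaden it.
const worker = new Worker('./untrusted-task.js', {
  permissions: { net: false, fs: { read: ['./data'], write: [] } },
});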

@bengl
Member

bengl commented Apr 22, 2022

@Qard

It seems to me that to adequately control I/O you need either module-level blocking of use in some way or super granular process-wide control of interfaces like saying http requests can only be made to this specific list of URLs or file system access only allows reading these specific files and writing these other specific files.

Yep! And I'm saying the latter is the only actually feasible one. The approach currently suggested by @RafaelGSS seems to be less granular than the ideal, but it keeps room open for it.

@jasnell We could do that, but to then have it be per module we'd have to consider:

  1. The weight of all those isolates.
  2. Providing and securing the interfaces between the isolates (non-trivial).

@jasnell
Member

jasnell commented Apr 22, 2022

Yep, which is not a level of complexity we should take on initially.

@Qard
Member

Qard commented Apr 22, 2022

Could probably make some changes to the vm module to make contexts more isolated and attach security controls to that. I'm definitely getting a sense though that whatever we do we can't really escape that it's going to be a whole lot of work and require substantial changes to many parts of the platform to reach any level of maturity. For sure we can start with some MVP, but I suspect that won't actually be all that useful short of serving as a proof-of-concept.

@MarcinHoppe
Contributor

This is a great discussion, and this effort is a great way to raise the overall security bar in the Node.js ecosystem.

I noticed that this discussion became very detailed very quickly. I'd like to highlight some areas that would be good to address in parallel with the implementation work.

Community outreach and documentation

Introducing fundamental security changes to a mature technology is a delicate task. The new model may be surprising to a lot of Node.js developers. I think that in parallel with implementing the feature we should be thinking about how to explain the changes to developers, how to convince them it is safe to use and valuable for their applications. We should have a good idea about how to guide them when to use the new mechanisms and how to introduce them gradually into existing applications.

Threat model

I wonder if there has been any prior work on documenting the specific threats and attacks this work would prevent. I feel that having a well-developed threat model would allow us to better communicate the value proposition as well as the limitations of the solution.

Security testing

The threat models should clearly indicate what types of attacks will be prevented when this work has landed in Node.js. I think it would be super valuable if we had a plan on how to engage the security research community during development of these changes to help us find weak spots and identify limitations of the approach and the implementation.

I know these issues are somewhat tangential to the actual implementation work, and I'll be more than happy to start separate issues to discuss them in more depth without disrupting this discussion.

@UlisesGascon
Member

UlisesGascon commented Apr 23, 2022

Great approach @MarcinHoppe! 🙌

We are running a parallel discussion on Slack regarding the documentation of the Security Model, threat models, and the current state of the art. I prepared a proposal in web format to start the discussion in the Security WG (how to structure the documentation, what the content should be...). It would be fantastic if you could join the discussion and help us validate and evolve the proposal 🙏

@MarcinHoppe
Contributor

I'll chime in on that discussion on Slack!

@RafaelGSS
Member Author

@MarcinHoppe Thank you! This strategy is also documented by nodejs/node#42709.

@brianleroux

IMO this is security theatre. There are two audiences for runtime security: people running workloads in the cloud, and people running local dev tools on their machines. With the cloud, we can't rely on runtime security for governance. Locking down Node won't do anything for the overarching infrastructure requirements, so in most cases the vendor will have a concept called IAM which enables complete granular security surrounding the runtime. Said another way, this feature won't make sense for the cloud. So that leaves devs running local dev tools. This audience is even less likely to use granular permissions to get the job done and will more likely run with --allow-all or the equivalent.

Overall I'm against more complexity, without much concrete benefit, and especially in the code loading paths which are too slow already compared to other JS runtimes.

@RafaelGSS
Member Author

That's a fair point. I think the parallel discussion on Slack will bring relevant points regarding this point of view. Still, I don't think we can generalize that all Node.js apps running in the cloud ensure application security using IAM, nor that all local dev tools would run the application using --allow-all.

The developer environment is a relevant security threat, and Node.js doesn't provide a security mechanism to prevent malicious packages from accessing sensitive host data.

@mhdawson
Member

mhdawson commented Apr 28, 2022

Threat model

This was discussed in the mini-summit, and I think defining/documenting the security model as captured in https://github.com/nodejs/node/blob/master/doc/contributing/security-model-strategy.md#document-the-security-model, and then the follow-on https://github.com/nodejs/node/blob/master/doc/contributing/security-model-strategy.md#document-threat-models-and-current-state-of-the-art, are steps that benefit the Node.js project even if we don't enhance functionality, and might also be the foundation for what @MarcinHoppe mentions as important for motivating/explaining any enhanced functionality. I think focusing on those to start will deliver value and make future discussion about specific enhancements easier.

@LongTengDao

LongTengDao commented Dec 5, 2022

As I said in #467, I think the most important thing is that, in 99% of cases, we want an app's main part to run in full-feature mode while some dependency (and its dependency chain) runs like a pure function, at most with limited fs/net ability:

current

// app
const a = require('a');
a(__dirname+'/tmp/name', 'data');
// a
module.exports=require('b');
// b
module.exports=require('c');

...

// y
module.exports=require('z');
// z
module.exports=(path, data)=>require('fs').writeFileSync(path, data);

proposal (minimal sample)

// app
const a = require.pure('a', {
    net: (address, port) => false,
    fs: (path, mode) => mode==='w' && path.startsWith(__dirname+'/tmp/'),
});
a(__dirname+'/tmp/name', 'data');

then there are only two contexts: app, and a-z.

@RafaelGSS
Member Author

That's under development. I'm currently fixing a few tests, but the unpermissive approach is ready (merging it to nodejs/node#44004 pretty soon).

@tniessen
Member

The goal for the Permission System is pretty straightforward. Supply Chain Attack is one of the attacks that anybody is vulnerable to nowadays. Today, if you install a package using npm i x, the package can exploit it in several ways. This security feature consists to grant powers to the developer in order to select which resource an application is allowed to execute

It's important to mention that, this feature will not mitigate supply chain attacks, but in theory, if this feature were used correctly, it can help. As I said previously, this is an MVP and once it releases and gets feedback from the community, it will be more accurate.

@RafaelGSS Could you elaborate on this? You say the goal is "pretty straightforward" and then talk about supply chain attacks, yet this seems to somewhat contradict this constraint:

  • It’s not a sandbox; we assume the user trusts the running code

Is this constraint only true in the experimental prototype, or will it forever be an assumption for the permission model?

@RafaelGSS
Member Author

Hi @tniessen! A few things changed before the PR implementation. I'll surely review it.

The permission model should help mitigate supply chain attacks if used correctly by npm/yarn/pnpm scripts. Of course, a supply chain attack is a broad class of attack, and the permission model will only help with resource access.

It’s not a sandbox; we assume the user trusts the running code

Basically, my intention with this statement was to avoid users thinking they will be able to safely run arbitrary code when the permission model is enabled. Even though resources can be restricted, there are many other attack vectors. But I think I should remove this statement, since it's somewhat unrelated to the feature itself.

@tniessen
Member

Thanks for the clarification @RafaelGSS :)

The permission model should mitigate supply chain attacks if used correctly by npm/yarn/pnpm scripts. Of course, Supply Chain Attack is a broad attack and the permission model will only help in resource access.

My understanding is that npm would have to at least disable native add-ons and child processes in order for this security mechanism to have any effect. So would this only be helpful for the subset of npm install executions that don't work with --ignore-scripts but that also don't use native add-ons or child processes? And in those cases, npm would have to fine-tune permissions to prevent, for example, uploading credentials from disk?

Also, it sounds like this mechanism does not have any effect on supply chain attacks that don't become active during installation but rather during normal execution, unless the user can disable subprocesses and native add-ons for the application as well.

@RafaelGSS
Member Author

My understanding is that npm would have to at least disable native add-ons and child processes in order for this security mechanism to have any effect.

When using the permission model (--experimental-permission), child processes and worker threads are disabled by default -- we are still discussing native addons, see nodejs/node#44004 (comment).

So would this only be helpful for the subset of npm install executions that don't work with --ignore-scripts but that also don't use native add-ons or child processes? And in those cases, npm would have to fine-tune permissions to prevent, for example, uploading credentials from disk?

So, we haven't thought too much about its interoperability with package managers. My intention is to land the permission model targeting runtime usage while allowing it to be expanded easily. I'm pretty sure once it lands we'll discuss the next steps to implement it by default in package managers. Ideally, each package should declare which access it needs at install time (in the case of postinstall scripts) and the developer should agree to that.

Also, it sounds like this mechanism does not have any effect on supply chain attacks that don't become active during installation but rather during normal execution, unless the user can disable subprocesses and native add-ons for the application as well.

That's partially correct. Although I agree that it requires work from package managers to specify the right flags for the npm install command, subprocesses and native add-ons will be disabled by default (when using the permission model). As mentioned in the PR description, this is the foundation for all further permission logic; I'm pretty sure that after releasing it and collecting feedback from the ecosystem, we will have a better vision.
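To illustrate the default-deny behaviour described above, a sketch assuming the --experimental-permission flag from the PR (the exact error code may differ):

// Hypothetical run: node --experimental-permission index.js
const { spawn } = require('child_process');

try {
  // Child processes are denied by default when the permission model is enabled.
  spawn('ls');
} catch (err) {
  // Expected to be a permission-related error, e.g. something like ERR_ACCESS_DENIED.
  console.error(err.code);
}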

@mcollina
Member

I might have lost the reasoning... why isn't it possible to support worker_threads?

@RafaelGSS
Member Author

After talking with @addaleax and @jasnell about propagating the rules to the worker thread, we agreed that besides adding unnecessary complexity, it's too risky to do it (at least in this first iteration).
I believe it can be a thing for the next iteration, but we need to be careful with this implementation.

@jasnell
Member

jasnell commented Jan 11, 2023

Definitely agree. Let's handle worker threads as a separate pr.

@mcollina
Member

That's ok, thanks for the clarification!

@tniessen
Member

@RafaelGSS Before nodejs/node#44004 lands, I want to say that I am having trouble following the arguments in this discussion. This is somewhat reiterating my concerns from last year's #791 (comment).

The main reason for implementing this at all is, as has been stated multiple times, to constrain the resources that can be accessed by supply chain attacks:

The goal for the Permission System is pretty straightforward. Supply Chain Attack is one of the attacks that anybody is vulnerable to nowadays. Today, if you install a package using npm i x, the package can exploit it in several ways. This security feature consists to grant powers to the developer in order to select which resource an application is allowed to execute

Yet, integration with package managers has not been discussed at all as far as I can tell. There are high-level ideas but I have yet to see specific implementation plans.

we haven't thought too much about its interoperability with package managers. My intention is to land the permission model targeting runtime usage while allowing it to be expanded easily. I'm pretty sure once it lands we'll discuss the next steps to implement it by default in package managers.

I think we can (and should) discuss some of the obvious challenges before landing any permission model.

For example, npm scripts in a package.json file are rather arbitrary shell commands, and npm merely adjusts the PATH environment variable to add executable files within node_modules/.bin so that they can be used by the shell command.

It is extremely common to have scripts such as "node src/post-install.js", i.e., invoking node is often not up to the package manager but rather to the script, thus, it is up to the shell command which arguments to pass to the process. Similarly, the executable scripts within node_modules/.bin, at least on Linux, are regular JavaScript files with an added shebang (or, more precisely, symbolic links to rather arbitrary files), and it is non-trivial to pass additional arguments.
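For example, a typical node_modules/.bin shim (simplified and hypothetical here, including the required path) is just a JavaScript file whose shebang decides how node is invoked, leaving the package manager no obvious place to inject permission flags:

#!/usr/bin/env node
// node_modules/.bin/some-tool (illustrative): the shebang above controls how node
// is started, so a package manager cannot easily add permission flags to this invocation.
require('some-tool/lib/cli.js');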

As @brianleroux wrote in #791 (comment), it may not be likely that this finds adoption in production environments, where granular access control through more mature and complete features is widely available. In development environments, however, it is very common to have an npm test script such as "ava" (or any other test runner). Not only would npm somehow have to pass arguments to the shebang line that the ava binary contains, but many test runners, including ava, rely on process isolation and thus happily spawn child processes for separate tests, which either has to be prevented or will allow circumventing all restrictions. It seems that securing npm test through the proposed permission model would require significant changes to Node.js, to package managers, and perhaps even to package.json files and the ecosystem in general.

Similarly, I made this assumption above:

So would this only be helpful for the subset of npm install executions that don't work with --ignore-scripts but that also don't use native add-ons or child processes? And in those cases, npm would have to fine-tune permissions to prevent, for example, uploading credentials from disk?

Following this, I am wondering what the main use cases for scripts are that we hope to cover with this. Compilation, for example, often resorts to child processes, which is incompatible with the proposed permission model. The npm documentation even contains examples for running make through npm scripts, which entirely bypasses any permission model within Node.js.


As @Qard stated above:

For sure we can start with some MVP, but I suspect that won't actually be all that useful short of serving as a proof-of-concept.

Beyond said proof-of-concept, we should have at least a rough roadmap for what we hope to accomplish. Otherwise, we are adding a lot of complexity and attack surface to Node.js without a good justification.

@Qard
Member

Qard commented Feb 16, 2023

Perhaps we could introduce a new parallel to the "scripts" section of package.json, something like "tasks", which only accepts JS files and can use whatever permissions are provided when telling node to run the task. For example, a package.json with my-task mapped to ./tasks/my-task.js would be run with something like node task my-task, and it would internally do the equivalent of node ./tasks/my-task.js automatically. With node task --allow-whatever my-task it would apply the allow-whatever permission. Package managers could be responsible for narrowing those permissions appropriately in their own installing/running of each task. We could probably even make it runnable as a specialized worker thread so package managers can run each task within their own process and narrow task permissions from their own permission set.
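A sketch of what that could look like (the "tasks" field and the node task command are hypothetical, as proposed above):

{
  "name": "my-package",
  "tasks": {
    "my-task": "./tasks/my-task.js"
  }
}

Running something like node task --allow-whatever my-task would then execute ./tasks/my-task.js with only that permission, and a package manager could narrow the set further when running tasks itself.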

In any case, I feel like the level of complexity and ecosystem impact here probably warrants an actual in-depth RFC covering the steps from initial implementation to the ultimate goal of how permissions will be used in the future when it's "complete".

@RafaelGSS
Member Author

RafaelGSS commented Feb 17, 2023

@tniessen thanks for raising this discussion. I will try to answer your questions inline and in the end, I'll share a bit about my point of view on this feature.

The main reason for implementing this at all is, as has been stated multiple times, to constrain the resources that can be accessed by supply chain attacks:

This information isn't totally accurate. But that's my fault. I've used supply chain attacks as an example because it's easy to understand the motivations and the intention of this model. However, this feature will not solve supply chain attacks entirely, at least not in this first iteration. As you described very well:

Yet, integration with package managers has not been discussed at all as far as I can tell. There are high-level ideas but I have yet to see specific implementation plans.

It is extremely common to have scripts such as "node src/post-install.js", i.e., invoking node is often not up to the package manager but rather to the script, thus, it is up to the shell command which arguments to pass to the process. Similarly, the executable scripts within node_modules/.bin, at least on Linux, are regular JavaScript files with an added shebang (or, more precisely, symbolic links to rather arbitrary files), and it is non-trivial to pass additional arguments.

There are some edge cases not yet covered by this feature, and I agree that the claim of solving supply chain attacks doesn't fully apply.

However, this PR is just the gateway to something bigger, and I totally agree with you that we should be discussing how to integrate it into the ecosystem; that's on me. I've been working on this, and my plan right after landing this on 'main' as experimental is to create a roadmap; for instance, we should support other resources (net, os) and npm hooks.

We can also argue that the utility of this feature is in the development workflow; as you said, production environments are usually surrounded by tools that supersede it. However, I don't think having a permission model in Node.js is useless; I think it is a good addition for runtime access management. For instance, denying access to folders/files at runtime, or guarding against modules that perform any kind of file system access during runtime. Until package managers adopt it as a mitigation, the utility of this implementation will live at runtime. But, as I said, this is just a gateway; I'm pretty sure once we land it and see the adoption, we can explore several strategies around it. And well, if it proves useless or not worth maintaining, I will be the first one to request its removal. As an experimental module, we can do that without issues.

For what it's worth, the permission model was previously researched by @addaleax and @jasnell, and we have iterated over it a few times since last year. If you folks want to share your point of view, it would be much appreciated.

@naugtur

naugtur commented Feb 17, 2023

Let me help with addressing the need/use case for this, as I'm working on a parallel implementation (and also a second generation of it) and running it "in production".

Here's a MetaMask repository with a policy (that's what permissions specification is called there) that's generated and enforced by LavaMoat
https://github.com/MetaMask/metamask-extension/tree/develop/lavamoat/build-system

LavaMoat consists of 2 main tools:

  • allow-scripts - provides protections against lifecycle scripts and bin confusion, without a policy, although between LavaMoat and Endo we were discussing attempting to find a way to make that work too
  • lavamoat-node - generates and enforces per-package runtime policy.

For those of you who may enjoy a video format: https://www.youtube.com/watch?v=-IE4KzGdsUI

While it is not currently possible to apply a detailed policy to whatever runs as a postinstall script, turning them off by default is quite effective.

If we control the runtime environment and want to defend against malicious code being added to the software in our dependencies (especially things we actually call), the policy is hard to bypass when running in hardened JavaScript (SES lockdown).

So yes, there's reasons to have this.

@RafaelGSS
Member Author

The Permission Model has landed in the main branch. A roadmap issue (#898) was created to discuss the future of this feature. Therefore, I'm closing this as completed. Thank you all for your valuable input.

@luislobo

Sorry to post on an old thread; feel free to just direct me to the right place.

With this Permission Model, would it make sense to also start defining permissions in modules' package.json?

I thought of that during one of those npm security incidents where an author started deleting files because of politics.

After searching, I found a blog post from @davidgilbertson https://david-gilbertson.medium.com/npm-package-permissions-an-idea-441a02902d9b discussing the idea.

Anyway, could you point me to the right thread/topic/Slack/Discord to contribute?

Thank you

@tniessen
Member

@luislobo Permissions are only supported at the process scope, that is, permissions cannot be granted to specific modules only. However, you might be interested in the existing policies feature, which is scoped to files/modules.
