Mechanism to constrain usages of unsafelyCreate #31

Closed
xtofian opened this issue Nov 30, 2017 · 7 comments

Comments

@xtofian

xtofian commented Nov 30, 2017

When applying safe-value types in large-scale software projects at Google, we've found it essential (see "Background" below) to have a reasonably strong, automated mechanism that constrains widespread use of unchecked conversions from string into a safe-value type (i.e., calls to unsafelyCreate).

Requirements

  • It must be feasible for projects to surface reminders of coding and review guidelines to the project's developers if a commit / pull request introduces a new unsafelyCreate call site.

  • It must be feasible for projects to centrally maintain an enforced whitelist of all call sites of unsafelyCreate (such that it is impossible to commit code that introduces non-whitelisted call sites). This allows the project's security expert to review all new usages for correctness, and to recommend alternatives for call sites that aren't strictly necessary (and are therefore undesirable, since they dilute the reviewability of the codebase).
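
As an illustration, such a whitelist check can be as simple as a pre-commit script. The following is a minimal sketch in Node.js; the whitelist file name and format, and how the hook passes changed files, are assumptions for illustration only, not part of any proposal.

    // Minimal sketch of a pre-commit whitelist check (hypothetical file names/format).
    const fs = require('fs');

    // One whitelisted file per line, e.g. "src/safe_url_builders.js".
    const whitelist = new Set(
        fs.readFileSync('unsafely_create_whitelist.txt', 'utf8')
            .split('\n')
            .filter(Boolean));

    // The commit hook is assumed to pass the changed file paths as arguments.
    const changedFiles = process.argv.slice(2);

    const violations = changedFiles.filter(
        f => f.endsWith('.js') &&
            /\bunsafelyCreate\b/.test(fs.readFileSync(f, 'utf8')) &&
            !whitelist.has(f));

    if (violations.length > 0) {
      console.error(
          'unsafelyCreate used outside the security-reviewed whitelist:\n  ' +
          violations.join('\n  '));
      process.exit(1);  // Block the commit until a security reviewer approves.
    }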

Third-party library concerns

Many libraries, such as jquery, expose APIs whose inputs flow down to DOM injection sinks (e.g., jquery's html(...) function flows into .innerHTML). In other words, these APIs are effectively injection sinks in their own right.

In the presence of TrustedTypes, these libraries should be refactored such that they themselves accept TrustedHTML etc. and forward those values to the underlying DOM sink. However, if an existing API performs any sort of transformation or construction of HTML markup on its own (this appears to be the case for jquery, for instance), it's likely that the library's authors will resort to unsafelyCreate.

An application's developers and security reviewers then effectively have to trust the library's author to have used unsafelyCreate in a correct (and ideally, reviewable) way.

A particular concern is that third-party library authors might simply wrap all uses of DOM sinks with unsafelyCreate in order to preserve existing functionality with minimal effort. Unfortunately, this would also preserve all existing vulnerabilities in the library, as well as the risk of vulnerabilities arising from the XSS sinks exposed by the library's API.

Possible mechanisms to constrain usage

  • For JS source code that is subject to compilation, a mechanism built into the compiler can be used, such as the Closure Compiler JS Conformance Framework.

  • In smaller projects, a grep-based pre-commit hook might be sufficient; for this, the only requirement is that unchecked conversion functions have reasonably unique names (to avoid false positives). unsafelyCreate seems reasonable, but it might be helpful to include the produced type's name in the function name, i.e. unsafelyCreateTrustedResourceUrl. Since these functions should be used rarely, an unwieldy name is not a concern and is in fact desirable.

  • The above mechanisms only constrain use of unsafelyCreate within the source of the application itself, not within its third-party dependencies (unless those are checked into the same repository). In particular, they can't constrain use of unsafelyCreate in libraries that are script-src'd directly from a separate server.

    It might be desirable to have a browser-level mechanism to permit use of unsafelyCreate only in specific, whitelisted sources. Perhaps something along the lines of a CSP header trusted-types-src 'nonce-xyz123', such that unsafelyCreate calls are only allowed from within scripts blessed by that nonce. This would allow an application developer/deployer to ensure that unsafelyCreate calls only occur in their own, security-reviewed code, and not in third-party libraries.

    Whitelisting based on CSP source is rather coarse-grained. Ideas for more fine-grained whitelisting:

    • Some mechanism to whitelist based on call stacks (or call stack suffixes). Not clear where those would be specified.
    • Perhaps each call to unsafelyCreate requires an additional nonce argument: the CSP header specifies a nonce, trusted-types-unsafely-create-nonce abc123, and each call to unsafelyCreate must be provided with the correct nonce, e.g. TrustedResourceURL.unsafelyCreate('abc123', 'https://static.example.com/my-lib.js'). (Of course this is only effective if the nonce cannot be directly obtained from JS.) This makes calling unsafelyCreate rather cumbersome (but again, that's desirable). In particular, source that calls unsafelyCreate must be rewritten before serving to inject the nonce. This should also create an incentive for library authors not to call unsafelyCreate, but rather to refactor their code to accept TrustedTypes in their API.
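
A rough sketch of this nonce-gated variant, purely for illustration (the wrapper below is not part of any proposal; in practice the expected nonce would be injected at serve time rather than hard-coded, and the original minter reference would not remain reachable):

    // Hypothetical sketch: gate TrustedResourceURL.unsafelyCreate on an out-of-band nonce.
    (function(expectedNonce) {
      const original = TrustedResourceURL.unsafelyCreate.bind(TrustedResourceURL);

      // Replace the minter with a variant that requires the correct nonce.
      TrustedResourceURL.unsafelyCreate = function(nonce, url) {
        if (nonce !== expectedNonce) {
          throw new TypeError('unsafelyCreate called without the correct nonce.');
        }
        return original(url);
      };
    })('abc123');  // Would be injected at serve time, not hard-coded.

    // A whitelisted call site then looks like:
    // TrustedResourceURL.unsafelyCreate('abc123', 'https://static.example.com/my-lib.js');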

Background

The key benefit of a types-based approach to preventing injection vulnerabilities is twofold:

  1. It replaces APIs that can potentially result in security vulnerabilities (i.e., injection sinks such as .href or .innerHTML assignment) with APIs that are inherently safe (for instance, Closure's goog.dom.safe.set{Location,Anchor}Href and setInnerHtml, or, in this proposal, the native href and innerHTML setters that only accept TrustedTypes). If an application only uses the DOM API via such mediated, inherently-safe APIs, application developers can no longer make mistakes in their use of the DOM API that result in XSS vulnerabilities.

  2. It makes it feasible for a security reviewer to arrive at a high-confidence assessment that an app is indeed free of (DOM) XSS vulnerabilities: She no longer has to audit every use of a DOM XSS sink throughout the app, and data flows therein; instead it's sufficient for her to audit only the code that constructs instances of the safe-value types (i.e., call sites of unsafelyCreate, and their fan-in).

IOW, the types-based approach replaces many XSS sinks (DOM APIs and properties), typically with many call sites throughout an application, with fewer sinks (unsafelyCreate) that typically have few call sites.
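
To make this concrete, a minimal sketch of what a sink looks like to application code under enforcement (getUntrustedUserName and buildGreetingHtml are hypothetical; the latter stands in for any security-reviewed builder that returns a TrustedHTML instance):

    // Under Trusted Types enforcement, the native sink only accepts the typed value.
    const element = document.getElementById('greeting');
    const userName = getUntrustedUserName();         // hypothetical: attacker-controlled string

    element.innerHTML = buildGreetingHtml(userName); // accepted: builder returns TrustedHTML
    element.innerHTML = '<b>' + userName + '</b>';   // rejected: plain strings no longer reach the sink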

To achieve its benefits both in terms of reducing actual XSS risk and facilitating efficient high-confidence security reviews, it is therefore crucial that call sites of unchecked conversions (i.e., unsafelyCreate) are,

  1. rare, and ideally only occur in stable, security-reviewed library code, and

  2. effectively reviewable (i.e. are not used in functions that in turn re-export an injection sink).

(Corresponding guidelines on the usage of unchecked conversions for the SafeHtml types are here: https://github.com/google/safe-html-types/blob/master/doc/safehtml-unchecked.md.)

We've found that in practice, individual developers sometimes (often?) do not consider the impact that an unchecked conversion they're introducing has on the above whole-system emergent properties (i.e., residual XSS risk and the feasibility of high-confidence security assessments). We therefore found it necessary to introduce static checks that disallow uses of unchecked conversions without security review (in our case, we rely on the package visibility mechanism in the bazel build system, and in some cases on compile-time checks implemented in error-prone).

Anecdote:

The concept of safe-value types was first introduced in Google Web Toolkit. The GWT SafeHtml type offers an unchecked conversion fromTrustedString. Its documentation indicates that its use is strongly discouraged.

Nevertheless, a few years after the introduction of this API, we found O(1000) call sites across GWT-based application source. Almost all of these were unnecessary (in the sense that they could have been rewritten using inherently-safe APIs such as SafeHtmlTemplates), and several of them did indeed introduce XSS vulnerabilities.

@koto
Member

koto commented Dec 1, 2017

I agree; the actual security Trusted Types would bring depends directly on being able to constrain unsafe creations. To distill the possible ways of addressing that:

  1. Leave that out of the spec, but make it possible to guard creations in compilers / linters / static code analyzers. This is where the majority of Google JS code is nowadays. In addition to that, unsafe constructs should be greppable. We might change the name of the functions to unsafelyCreateTrustedXYZ like you suggested; that's a small improvement.

  2. Block unsafe creation using a script-specific policy - e.g. a CSP trusted-types-src source. I'd rather not bind ourselves to CSP just yet, but the idea of having script-specific limitations is interesting, and we've looked into it as well. I imagine implementing script-specific constraints would be hard - @mikewest, WDYT?

  3. More fine-grained runtime limitations. This is something @slekies and I looked into. Some of the ideas may be implemented purely in JS, without any spec changes. For example:

    // Wrap the raw unchecked conversions in app-specific, constrained creation
    // functions, then remove the raw unsafelyCreate references from the types.
    (function(global, myOrigin) {
      const createHTML = TrustedHTML.unsafelyCreate.bind(TrustedHTML);
      const createScriptURL = TrustedScriptURL.unsafelyCreate.bind(TrustedScriptURL);

      // Create TrustedHTML directly only for markup-free strings; anything else
      // goes through a sanitizer first (aDecentSanitizer stands in for whatever
      // HTML sanitizer the application uses).
      global.TrustedHTML.createSanitizedHTML = html => {
        if (html.indexOf('<') == -1) {
          return createHTML(html);
        } else {
          return createHTML(aDecentSanitizer.sanitize(html));
        }
      };

      // Create TrustedScriptURL only for URLs whose origin is whitelisted.
      global.TrustedScriptURL.createWhitelistedURL = url => {
        let u = new URL(url);
        const whitelist = [
          'https://third-party.example.com',
          myOrigin
        ];

        if (whitelist.indexOf(u.origin) !== -1) {
          return createScriptURL(url);
        }
        throw new TypeError(`Origin of ${url} not whitelisted as TrustedScriptURL.`);
      };

      // Remove the raw unchecked conversions so only the constrained builders remain.
      delete TrustedHTML.unsafelyCreate;
      delete TrustedScriptURL.unsafelyCreate;
    })(window, location.origin);

To facilitate locking down, we might want to expose an API for adding such app-specific, safe creation methods, together with a function to lock down the setup completely (see e.g. koto/trusted-types@templatestrings-es6...koto:lockdown). This approach has the limitation of being a runtime mechanism (so any code that runs before it might be able to circumvent it), and, being in JavaScript, it doesn't propagate to new documents, e.g. in iframes.

The lesson to take from GWT is that maybe we should look into disallowing unsafe creations by default (speaking in CSP terms, allowing them would require an unsafe- prefix). That can't be done, however, if we don't offer ways in the platform to create a working application without them. We need to build in support for custom, "blessed" builders and allow a richer set of native safe builders.

@koto
Member

koto commented Dec 1, 2017

FYI, I extracted the granular enforcement of types proposal into #33.

@xtofian
Author

xtofian commented Dec 5, 2017

Re: propagating to iframes: I think it's OK to not propagate TrustedTypes enforcement to iframes: If the iframe is same-origin, its source is controlled by the same organization, and it's up to them to ensure that all their endpoints enable TrustedTypes. And if it's a different origin, it doesn't matter.

Re: globally disallowing unsafelyCreate: Based on our experience, I don't think that's feasible in practice.

There are three broad categories of uses of unsafelyCreate:

  1. Use inside builders and frameworks. These uses establish the "core semantics" of the types (e.g., run-time predicates that recognize URLs that are considered safe in the context of a given application or group thereof; builders for TrustedHTML that support concatenation and construction of simple markup; contextually escaping HTML templating systems to construct complex markup).

  2. Special-case uses throughout application code. These accommodate special cases required by a particular application. For example, functions that construct URLs to deep-link into mobile applications (e.g. via Android intents or custom schemes); these URLs are typically too specialized to include in a standard library of builders. Of course, they could be factored out into application- or organization-specific libraries, but if there are only a few call sites the added friction of doing so is undesirable.

  3. "Legacy conversions" in existing code: When adapting existing code to use TrustedTypes (or Closure SafeHtml types), it's often necessary to refactor injection sinks in library APIs via multiple steps. To change an API that consumes a string that is passed to an injection sink (e.g. .innerHTML) to instead accept TrustedHTML, all its transitive callers need to be adjusted to actually supply that type. In large code-bases this can be difficult to do in a single atomic change. It is often more practical to change all immediate callers to (unsafely) convert whatever string they have to the required type, and then proceed outwards in separate changes. During large refactorings, we've found it useful to employ separate "legacy conversion functions". This distinguishes code that directly calls the "proper" unchecked conversion, and which should have been security-reviewed to be actually-safe, from legacy code that treats strings as HTML without security review (and which should be further refactored).

In a typical application, there should be only very few uses of (2) (say, a handful at most, even in a fairly complex application), and none of (3) in newly developed code. Legacy code that is gradually adopting TrustedTypes might, however, have many uses of (3).
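
For illustration, a minimal sketch of such a separate legacy conversion function (the name is hypothetical; its only purpose is to make unreviewed legacy call sites greppable and distinguishable from reviewed uses of the underlying unchecked conversion):

    // Hypothetical legacy conversion wrapper, assuming TrustedHTML.unsafelyCreate
    // as in the current proposal. New code must not call this.
    function legacyConversionToTrustedHTML(html) {
      // TODO: refactor callers to construct TrustedHTML via safe builders,
      // then delete this function.
      return TrustedHTML.unsafelyCreate(html);
    }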

@mikesamuel
Collaborator

https://www.npmjs.com/package/node-sec-patterns is one way project maintainers can constrain which NPM modules mint values that encapsulate security guarantees, via a minter/verifier model where the verifier is ambiently available but access to the minter is checked.

@koto
Member

koto commented Feb 19, 2018

Regardless of the resulting API, there will be primitives available to user-land code for creating trusted values, i.e. values that would be allowed to be put into the DOM even when TT enforcement is enabled (let's call these primitives TT minters, by analogy with https://github.com/mikesamuel/node-sec-patterns). Those primitives will likely be references to functions; in the current iteration they are the TrustedFoo.unsafelyCreate references.

Either the platform features or user-land hardening code (which might be offered as a library) needs to guard access to the minters. Even if the platform offers those, such methods need to be polyfillable, and should resist most known bypass methods - at least the ones that could not be detected at compile time.

https://github.com/mikesamuel/node-sec-patterns offers a way of guarding function references by:

  • Using the node module loading mechanism and node package metadata to store the whitelisting information.
  • Using call-stack inspection to prevent calling the minter function from non-whitelisted callsites.
  • Tracking meta-information about minted instances in a WeakSet. Obtaining an instance of an object is not enough to bypass that check (it's stronger than instanceof, which can be bypassed with Object.create). Only an instance that was minted will pass verification.
  • Lots of defensive programming - freezing instances, making properties non-configurable, etc.

So, in the end, the minter references are available globally (after a given module is loaded), but access to them is protected at runtime. While we cannot fully adopt all the checks (there's no client-side package.json equivalent), I think this offers a useful enhancement to a possible hardening library (or the polyfill for the native functionality). I would still like to be able to combine that with additional checks based on e.g. unavailability of the reference to the minter in a certain context, or on the minted value being an instanceof something.
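
For illustration, a minimal sketch of the WeakSet-based part of that pattern (the names are hypothetical and not part of node-sec-patterns or the Trusted Types API; the whitelisting and call-stack checks are omitted):

    // Track minted instances in a WeakSet; verification cannot be spoofed with
    // Object.create, since only instances produced by the minter are in the set.
    const minted = new WeakSet();

    // Access to this function would be guarded (whitelist / call-stack inspection).
    function mintTrustedValue(stringValue) {
      const value = Object.freeze({ toString: () => stringValue });
      minted.add(value);
      return value;
    }

    // Ambiently available verifier: only values produced by the minter pass.
    function verifyTrustedValue(value) {
      return minted.has(value);
    }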

@mikesamuel
Collaborator

"there's no client side package.json equivalent"

Yeah. Not having a reliable name for a compilation unit makes things difficult.
ES6 modules are a bit better, but I don't know how easy it is to get from an ES6 module to a canonical name.

If we did have a way to reliably determine a name for a compilation unit, we could bootstrap something by looking for a whitelist under /.well-known per RFC 5785.

That might make source-maps security-relevant though.

@koto
Member

koto commented Jun 4, 2018

I believe this is implemented with the policy parts of the new API. Please reopen if something's missing.
