- Background
- Setup
- File layout
  - `./bin/node`
  - `./bin/npm`
  - `scripts/`
  - `main.js`
  - `package.json`
  - `static/*`
  - `static/user-uploads/*`
  - `lib/**/*.js`
  - `lib/server.js`
  - `lib/handler/**/*.{js,pug}`
  - `lib/framework/*.js`
  - `lib/framework/lockdown.js`
  - `lib/code-loading-function-proxy.js`
  - `lib/framework/module-hooks/*.js`
  - `lib/framework/module-stubs/*.js`
  - `lib/safe/*.js`
  - `pg/**`
  - `vulnerable/**`
- What is a breach?
- What is not a breach?
- Reporting and verifying a breach
- Data collection
- Goals
- Getting Answers To Questions
Thanks for your interest in attacking the hardened Node demo app. Hopefully seeing what security machinery resists concerted attempts by skilled attackers will help the Node community better safeguard their users' interests.
This document explains how to access the target application, and what does and does not constitute a successful breach.
Slow Setup Ahead: The target application is meant to be run locally on a machine you control. The setup below builds a patched node runtime from source, which can take several minutes, so it may be worth starting that build while you browse this document.
The target is a simple social-networking web application that lets users post updates. "Setup" explains how to get it up and running. "What is a breach" and "What is not a breach" explain what constitutes a successful attack. Later, there are logistical notes on how to contribute breaches or report other issues. The final section explains how this effort fits into a bigger picture.
The target application has an intentionally large attack surface. It would be easy to breach were it not for the security machinery under test, so if it resists breach, then the security machinery provides value. Specifically, the target application code does not filter inputs for dangerous patterns, does not explicitly escape outputs, calls `require()` with a URL path to dispatch to HTTP request handlers, and composes SQL queries and shell strings from untrusted strings without explicit escaping.
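As a hypothetical sketch (not the app's actual code), the `require()`-based dispatch described above might look like this; the function name and layout are illustrative assumptions:

```javascript
// Hypothetical sketch of require()-based dispatch, for illustration only.
// Note there is no allow-list: the URL path picks the module directly,
// which is exactly the kind of large attack surface described above.
function handlerModuleFor(urlPath) {
  const name = urlPath.replace(/^\/+/, '') || 'index';
  return 'lib/handler/' + name + '.js';
}

// A request for /post would map to lib/handler/post.js, roughly:
//   require('./' + handlerModuleFor('/post'))
```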
Thanks much for your time and attention, and happy hunting!
You may want to fork your own copy of https://github.com/mikesamuel/attack-review-testbed since the review rules allow for making edits that a naive but well-intentioned developer might make. If so, `git clone` your fork instead.
You will need on your `$PATH`:

- A modern version of `npm` (or `yarn`)
- `postgres` >= 9.5 and helpers like `initdb`
- `make` and a C++ build chain
```sh
apt-get install npm postgresql
brew install node [email protected]
```
To fetch and build locally, run

```sh
# Check out the code from GitHub. OR USE YOUR FORK HERE
git clone https://github.com/mikesamuel/attack-review-testbed.git
cd attack-review-testbed
scripts/preinstall.sh           # Builds patched node and npm. SLOOOW
export PATH="$PWD"/bin:"$PATH"  # Use locally built node and npm
npm install                     # Fetch dependencies
npm test                        # Run tests to generate files
npm start                       # Start a demo server
```
Doing so will not run any git hooks, but it does run some npm install hooks. If you have a healthy sense of paranoia, you can browse the hook code.
If everything did not go smoothly, please try the wiki, which provides troubleshooting advice, or ask in the support forum.
If everything went smoothly, you should see

```
...
Serving from localhost:8080 at /path/to/attack-review-testbed
Database seeded with test data
```
Browsing to http://localhost:8080/ will get you to the target app.
You can also run the vulnerable server via `npm run start:vuln`.
After you've finished attacking, we'd appreciate it if you can fill out the post-attack questionnaire.
A patched node runtime and bundled npm that provide hooks used by the security machinery that I hope you will attack. You can find the patch at `./bin/node.d/node.patch`.
Various scripts used by hooks and other command line tools.
`bin/npm start` and `bin/npm run start:vuln` should spin up a server, so you probably won't need these directly unless you make extensive edits to the demo app.
The application main file responsible for specifying module loader hooks and other node flags, parsing argv and the environment, starting up an HTTP server and connecting to a Postgres database.
`./main.js --help` will dump usage.
Configuration for key pieces of security machinery lives here. See especially the `"mintable"` and `"sensitive_modules"` keys, which are unpacked by `lib/framework/init-hooks.js`.
Client-side JavaScript, styles, and other static files served by the target application.
Files uploaded by users via the `/post` form.
Server-side JavaScript that is meant to run in production.
Test-only code is under `test/`.
Responsible for starting up the server and dispatching requests to code under `lib/handler`. `lib/server.js` will delegate handling for a URL path like `/post` to `lib/handler/post.js` which, by convention, renders HTML using `lib/handler/post.pug`.
Security machinery.
Attempts to make user code less confusable by preventing mutation of builtins. `myFn.call(...args)` should reliably call `myFn`.
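A minimal sketch of the idea, assuming a much smaller scope than the real `lib/framework/lockdown.js`: freeze a few intrinsics so later-running code cannot redefine them.

```javascript
// Minimal sketch of the lockdown idea, for illustration only: freeze a
// few intrinsics so later code cannot replace, e.g., Function.prototype.call.
// The real lockdown.js is far more thorough than this.
function lockdownSketch() {
  for (const intrinsic of [Function, Array, String]) {
    Object.freeze(intrinsic);
    Object.freeze(intrinsic.prototype);
  }
}

lockdownSketch();
// After this, an assignment like `Function.prototype.call = evil` fails
// (silently in sloppy mode, with a TypeError in strict mode), so
// myFn.call(...args) keeps its original meaning.
```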
Allows `new Function(...)` to load code for legacy modules even when the patched node runtime is invoked with `--disallow_code_generation_from_strings`. Ideally, `new Function` in well-understood core modules should continue to work, but implicit access to `new Function` should not. Specifically, `({})[x][x](y)` should not load code when an attacker causes `x = 'constructor'`.
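To see why this matters, here is what stock Node (without the flag and this proxy) allows; the arithmetic payload is just for illustration:

```javascript
// On stock Node, an attacker-controlled property name can reach the
// Function constructor without the word `Function` appearing anywhere:
const x = 'constructor'; // imagine this value arriving from an attacker
const implicitFunction = ({})[x][x]; // ({}).constructor is Object; Object.constructor is Function
const result = implicitFunction('return 6 * 7')(); // runs attacker-supplied code -> 42

// With --disallow_code_generation_from_strings plus the proxy above, this
// implicit route should fail instead of evaluating the string.
```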
Hooks that run on `require` that check resource integrity and isolate most user code from sensitive modules like `child_process`.
Stub out legacy modules that the target application should not need but which load due to dependencies.
For example, the target application uses DOMPurify to sanitize HTML which loads jsdom, a server-side HTMLDocument emulator. We do not need the bits of jsdom which fetch CSS and JavaScript.
Safe-by-design APIs and wrappers for commonly misused, powerful APIs.
Database files owned by the locally running Postgres server. `scripts/run-locally.js` sets up a server and a Postgres process that communicate via a UNIX domain socket.
A vulnerable variant of the server created by applying `vulnerable.patch` to the server source files. This server has most of the mitigations disabled, so you can test attacks against it and then see whether they still work against the target server.
The target application was designed to allow probing these classes of vulnerability:
- XSS: Arbitrary code execution client-side.
- Server-side arbitrary code execution
- CSRF: Sending state-changing requests that carry user credentials without user interaction or expressed user intent.
- Shell Injection: The structure of a shell command that includes crafted substrings does not match the developer's intent.
- SQL Injection: The structure of a database query that includes crafted substrings does not match the developer's intent.
Anything that directly falls in one of these classes of attack is in bounds, but please feel free to consider other attacks.
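For instance, the SQL-injection class above can be illustrated with a hypothetical naively composed query (this is not code from the target app):

```javascript
// Hypothetical naive composition: a crafted substring changes the
// *structure* of the query, not just a value inside it.
function naiveQuery(userName) {
  return "SELECT * FROM users WHERE name = '" + userName + "'";
}

naiveQuery("alice");
// -> SELECT * FROM users WHERE name = 'alice'        (one string literal)
naiveQuery("a' OR '1'='1");
// -> SELECT * FROM users WHERE name = 'a' OR '1'='1' (structure changed)
```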
If the only thing preventing an XSS is the Content-Security-Policy header, then you have a breach. We treat CSP as useful defense-in-depth, but it is a goal not to rely on it.
The target application disables X-XSS-Protection since that header has a poor history of stopping determined attacks. While finding X-XSS-Protection bypasses is fun, we'd rather attackers focus on the target app.
Running `find lib -name \*.js | xargs egrep '^// GUARANTEE'` will list documented security guarantees. Compromising any of these guarantees by sending one or more network messages to the target server is a breach.
It is a goal to allow rapid development while preserving security properties. Files that are not part of a small secure kernel should not require extensive security review.
Feel free to change files that are not marked SENSITIVE (`find lib -name \*.js | xargs egrep '^// SENSITIVE'`) or to add new source files. If you send a pull request and it passes casual review by @mikesamuel, then you can use those changes to construct an attack. It may be easier to manage changes if you use your own fork.
If your PR includes files marked SENSITIVE, then those files will receive a higher level of scrutiny.
For example, enumerating the list of files under `static/user-uploads` to access uploads associated with non-public posts would be a breach, but not if it requires an obvious directory traversal attack involving `require('fs')` in naive code.
Do not send PRs to dependencies meant to weaken their security posture. If you have an attack that is only feasible when your files that pass casual review live under `node_modules`, feel free to put them there, but do not, under any circumstances, attempt to compromise modules that might be used by applications that did not opt into attack review.
Instead of attacking the web application and coming up with a patch you may directly attack the security machinery under test:
- module-keys aims to provide identity for modules. Stealing a module private key is a breach.
- node-sec-patterns aims to protect minters for contract types. Crafting a contract value that passes a contract type's verifier in violation of the configured policy is a breach. Also, stealing a minter is a breach.
- sh-template-tag and safesql attempt to safely compose untrusted inputs into trusted templates. Finding an untrusted input that can violate the intent of a trusted template string that a non-malicious developer might plausibly write and that would seem to operate correctly during casual testing is a breach.
- pug-plugin-trusted-types aims to do the same for HTML composition. The same plausible template standard applies.
- web-contract-types provides contract types for HTML, URLs and other webapp languages. It also has some APIs that mint contract values.
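As a toy sketch of the template-tag approach these libraries use (not their actual implementations), the tag receives the literal parts separately from the interpolated values, so it can escape each value before composing:

```javascript
// Toy string-valued SQL tag, for illustration only: every interpolation is
// treated as a string value and single quotes are doubled, so a crafted
// input cannot break out of its string literal. The real safesql does
// context-sensitive escaping and mints a typed value, not a plain string.
function toySqlTag(strings, ...values) {
  return strings.reduce(
    (out, part, i) =>
      out + "'" + String(values[i - 1]).replace(/'/g, "''") + "'" + part);
}

const name = "a' OR '1'='1";
const q = toySqlTag`SELECT * FROM users WHERE name = ${name}`;
// q: SELECT * FROM users WHERE name = 'a'' OR ''1''=''1'
```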
If you have any questions about what is or is not in bounds, feel free to ask at the support forum.
It's much easier to test via http://localhost:8080, but we assume that in production the target application would run in a proper container so all access would be over HTTPS. Attacks that involve reading the plain text of messages in flight are out of bounds. Anything in the security machinery under test that might contribute to an HTTPS → HTTP downgrade attack would be in bounds.
Similarly network level attacks like DNS tarpitting are out of bounds since those are typically addressed at the container level.
We assume that in production the target application would be one machine in a pool, so DDOS is out of bounds.
`process.bindings` is a huge bundle of poorly governed, abusable authority. Efforts to address that are out of scope for this project, so attacks that involve abusing `process.bindings` are out of bounds, but if you find something cool, feel free to report it and maybe we'll summarize reasons why `process.bindings` needs attention.
Normally, the database process would run with a different uid than the server process and its files would not be directly accessible to the server process. Direct attacks against the database process or files it owns are out of bounds. The database only runs with the same uid so that startup scripts don't need to setuid.
Startup and build scripts assume that you have installed dependencies like `make`, `pg`, and `npm` on a `$PATH` that you choose. Attacks that install Trojans into readable directories on `$PATH` are out of bounds, but attacks that cause the server to spawn another server with an attacker-controlled `$PATH` environment variable would be in bounds.
We assume that in production the server runs as a low-privilege user. Attacks that require running the server as root or running the server with ambient developer privileges (like `git commit` or `npm publish` privileges) are out of bounds.
We assume that in production the server would not have write access to the node runtime, so attacks that require overwriting `./bin/node` are out of bounds, but attacks that overwrite source files are in bounds.
We log requests including nonces. This log would not be part of any production system, so attacks that involve stealing secrets from it are out of bounds.
Attacks that involve socially engineering other attack reviewers or attacking the hardware of another reviewer are out of bounds. This includes exfiltrating log files from other attackers that might contain information about what they've tried.
If you're submitting a sneaky pull request meant to pass casual code review, we understand that you may need to socially engineer your PR's reviewers. Attacks against hardware used to review or approve your sneaky PRs are out of bounds.
Attacks against GitHub and its code review applications are out of bounds.
Submitting malicious code to dependencies of the attack-review-testbed is out of bounds.
Attackers and defenders should treat one another in a collegial manner. See the JS Foundation code of conduct if you're unsure what that means.
To report a suspected breach, go to issues/new?template=breach.md and fill out the fields there.
Clicking that link will take you to a breach report template which explains what info to include. As mentioned there, you can draft your report as a secret Gist which lets you save progress, and then just dump a link to the Gist in place of a report.
If we verify a breach, we will add either the full breach label or the partial breach label.
If we can't verify, we may ask questions.
If a breach is similar in mechanism to one previously reported, we may mark it as a duplicate.
If a breach builds on an earlier reported breach but surpasses it in effect or generality, then we may mark the earlier report a duplicate of the later one, but the earlier reporter will still get full credit for their report.
If a breach report does expose a vulnerability in a third-party module, then we reserve the right to edit the report to elide details while we coordinate with the upstream maintainers.
If you want to report a breach that relies on a vulnerability in a third-party module, feel free to DM @mvsamuel on Twitter. I can help with disclosure, or I can keep that as a record that you get credit as first reporter even if you're not comfortable posting details.
If there's any dispute over who reported a breach first, we will take secret Gists' commit history into account, and may decide that a vulnerability was independently discovered by multiple reporters.
Running `scripts/build-vulnerable.sh` will copy the server source files over to a directory `vulnerable/`. You can then run `vulnerable/scripts/run-locally.js` to start up the modified server.
The vulnerable server has most of the mitigations disabled, so you can try an attack against the vulnerable server. If it doesn't work against the target server, then the difference between the two mitigated the attack.
When you run your server using `scripts/run-locally.js`, it appends to a log file, `request.log`. The information it logs is at most:
- The content of every HTTP request to the target server, which includes HTTP request bodies with uploads.
- The content of every HTTP response issued by the target server.
- Timestamps
- The hash of the most recent git commit so we can try to correlate log entries with sneaky PRs.
None of this logged information is collected from your machine.
We will request that you send us these logs so that we can replay attacks against a target server with security machinery selectively disabled, to quantify the attack surface that each mechanism covers.
We will try not to collect `$USERNAME`, `$HOME`, and other information that might identify a real person, and to sanitize any snippets of logs used in any publication.
Submitting these logs is entirely voluntary, and you may edit them before sending. If you realize there's something in them you included unintentionally, we will respect all requests to delete or redact logs received prior to publication. We will provide a hash of any log you submit; including that hash will make it easier for us to honor such requests.
We hope to clarify the claim:

> It is easier to produce and deploy code on the hardened node runtime that resists these classes of attack than it is to produce and deploy vulnerable code.
We assume that developers are not malicious but do not consistently work to avoid security problems. Insider threats and supply-chain security are important issues but are out of scope of this project.
We focus on classes of attack related to the integrity of messages that cross process or network boundaries since many of these are often missed by good development practices like code review and unit-tests.
1. Some security-relevant code is indistinguishable from application logic, e.g. access-control logic.
2. Some is not, e.g. input parsers and HTML templates.
3. Probabilistic arguments about distributions of inputs apply to (1); test coverage and code review give confidence in application-logic correctness.
4. Attackers craft strings that sequentially target corner cases, so probabilistic arguments do not apply to (2); test coverage and code review alone are poor predictors of parser correctness.
5. We can best use scarce security-engineering resources by leaving (1) to application developers and focusing on (2).
If the target application largely resists attacks, or would have with adjustments, then we can argue that it is easier to produce robust code than vulnerable code on a hardened Node.js stack.
Should that argument hold, we hope to work with framework authors and upstream fixes so that it is not just easier to deploy robust software on the hardened node stack, but easy.
Our end goal is that large application development teams should be able to confidently produce secure systems by using a hardened stack and granting a small group of security specialists oversight over a small kernel of critical configuration and source files.
Post in the #attack-review channel at https://nodejs-security-wg.slack.com/ if you're having trouble.
We'll try to update the wiki in response to common questions.
If you need a private channel:
| Contact | Availability | DMs |
| --- | --- | --- |
| Mike Samuel | US EST (GMT-5) | twitter/mvsamuel |
This is not an official Google product.