The Nixpkgs Security Tracker is a web service for managing information on vulnerabilities in software distributed through Nixpkgs.
The tool is eventually meant to be used by the Nixpkgs community to work through security advisories effectively. We identified three interest groups that the tool addresses:

- Nix security team members use it to access an exhaustive feed of published CVEs, decide on their relevance, link them to affected packages in Nixpkgs, notify package maintainers, and discuss the issues with other team members.
- Nixpkgs package maintainers get notified and receive updates on security issues that affect the packages they maintain. By discussing issues with security team members and other maintainers, they can help figure out which channels and packages are affected and ultimately work on fixes.
- Nixpkgs users can subscribe to ongoing security issues that affect the packages they use and stay updated.
The service is implemented in Python using Django.
Start a development shell:

```shell
nix-shell
```

Or set up nix-direnv on your system and run `direnv allow` to enter the development environment automatically when entering the project directory.
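If the repository does not already ship one, the direnv setup needs an `.envrc` file at the project root; a minimal sketch, assuming nix-direnv provides the `use nix` helper and the development shell is defined in `shell.nix`:

```shell
# .envrc (hypothetical, at the project root): have direnv load the
# Nix development shell through nix-direnv's cached `use nix`
use nix
```

After creating or changing `.envrc`, run `direnv allow` once to approve it.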
Currently only PostgreSQL is supported as a database. You can set up a database on NixOS like this:
```nix
services.postgresql.enable = true;
services.postgresql.ensureDatabases = [ "nix-security-tracker" ];
services.postgresql.ensureUsers = [
  {
    name = "nix-security-tracker";
    ensureDBOwnership = true;
  }
];
```
- Create a new or select an existing GitHub organisation to associate with the application.
- For your GitHub user, under "Developer Settings", generate a new personal access token.
  This is not strictly necessary just to run the service, but it allows for more API calls and is therefore important for a production deployment.
  - Click "Generate new token".
  - Under "Resource owner", select the GitHub organisation associated with the application.
  - Under "Repository access", select "Public Repositories (read-only)".
  - No other permissions are required.
  - Store the value in `.credentials/GH_TOKEN`.
- In the GitHub organisation settings, create an OAuth application:
  - Under "Personal access tokens", approve the request.
  - Under "Developer settings" > "OAuth Apps", create a new application.
    Store the Client ID in `.credentials/GH_CLIENT_ID`.
  - In the application settings, generate a new client secret.
    Store the value in `.credentials/GH_SECRET`.
You only need real GitHub credentials to use the OAuth login feature. To get going quickly, set arbitrary values for the secrets required by the server:

```shell
mkdir .credentials
echo foo > .credentials/SECRET_KEY
echo bar > .credentials/GH_CLIENT_ID
echo baz > .credentials/GH_SECRET
```
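The placeholder values are enough for local development, but `SECRET_KEY` in particular should be unpredictable anywhere else. A sketch generating a random value, assuming `openssl` is available in the shell:

```shell
# Replace the placeholder Django SECRET_KEY with 32 random bytes,
# hex-encoded (64 characters)
mkdir -p .credentials
openssl rand -hex 32 > .credentials/SECRET_KEY
```

`GH_CLIENT_ID` and `GH_SECRET` cannot be generated this way; they must come from the GitHub OAuth application described above.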
For now, we require a GitHub webhook to receive push notifications when team memberships change. Configure the GitHub app and the webhook in the GitHub organisation settings:

- Under "Code, planning, and automation" > "Webhooks", create a new webhook:
  - In "Payload URL", enter `https://<APP_DOMAIN>/github-webhook`.
  - In "Content Type", choose `application/json`.
  - Generate a token and put it in "Secret". Store the same token in `.credentials/GH_WEBHOOK_SECRET`.
  - Choose "Let me select individual events":
    - Deselect "Pushes".
    - Select "Memberships".
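The webhook secret is just a shared random string. A sketch creating one, assuming `openssl` is available; paste the same value into the webhook's "Secret" field:

```shell
# Generate a random shared secret used to validate webhook payloads;
# the identical string goes into the "Secret" field of the GitHub form
mkdir -p .credentials
openssl rand -hex 32 > .credentials/GH_WEBHOOK_SECRET
```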
Set up the database with known-good values to play around with:

```shell
./contrib/reset.sh
```

Start the server and its workers:

```shell
hivemind
```
Run all integration tests:

```shell
nix-build -A tests
```

Run a smoke test:

```shell
nix-build -A tests.vm-basic
```

Interact with the virtual machines involved in a test:

```shell
$(nix-build -A tests.vm-basic.driverInteractive)/bin/nixos-test-driver
```
Whenever you add a field to the database schema, run:

```shell
manage makemigrations
```

Then, before starting the server again, run:

```shell
manage migrate
```

This is the default Django workflow.
If you haven't changed the schema, using fixtures is faster than resetting the database completely.
Remove all data:

```shell
manage flush
```

A fixture file is available for the `shared` app, located at `src/website/shared/fixtures/sample.json`.
To load it into the database:

```shell
manage loaddata sample
```

Here `sample` is the name of the fixture JSON file; Django looks inside the app folders for a fixture matching this name.
To create (or update) a fixture file:

```shell
manage dumpdata shared > src/website/shared/fixtures/sample.json
```
Add 100 CVE entries to the database:

```shell
manage ingest_bulk_cve --subset 100
```

This will take a few minutes on an average machine.
Not passing `--subset N` will take about an hour and produce ~500 MB of data.
Evaluating Nixpkgs happens on a local Git repository. Start by creating a checkout:

```shell
manage initiate_checkout
```

The service will then listen for the creation of channel entries in the database. These are made by the following command, which gets all recent channel branch evaluations and fetches the corresponding commits into the local Git repository:

```shell
manage fetch_all_channels
```

To run an evaluation of Nixpkgs manually, for example of the `nixos-23.11` channel:

```shell
./contrib/get-all-hydra-jobs.sh -I nixpkgs=channel:nixos-23.11
```

Take note of the Git revision of Nixpkgs you're evaluating. For a channel, this can be found in the associated `git-revision` file, for example <https://channels.nixos.org/nixos-23.11/git-revision>.
The script will write to `$PWD/evaluation.jsonl`. This takes ~30 min on a fast machine and needs lots of RAM.
To get going faster, use this pre-built temporary file:

```shell
wget https://files.lahfa.xyz/private/evaluation.jsonl.zst
zstd -d evaluation.jsonl.zst -o ./contrib/evaluation.jsonl
```
Before ingesting, run `manage runserver` and manually create a "Nix channel":

```shell
manage register_channel '<null>' nixos-unstable UNSTABLE
```

The "Channel branch" field must match the parameter passed to `ingest_manual_evaluation`, which is `nixos-unstable` here.
All other fields can have arbitrary values.
Add 100 entries for one evaluation of a channel branch, providing the commit hash of that evaluation as well as the channel branch:

```shell
manage ingest_manual_evaluation d616185828194210bfa0e51980d78a8bcd1246cc nixos-unstable evaluation.jsonl --subset 100
```

Not passing `--subset N` will take about an hour and produce ~600 MB of data.
If you have your SSH keys set up on the staging environment (and can connect through IPv6), you can deploy the service with:

```shell
./staging/deploy.sh
```

Add your SSH keys to `./staging/configuration.nix` and let existing owners deploy them.