Ingestion of evaluation results of any supported channel #8
I should also be able to use a management command to seed my evaluation results into my database without having to go through
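A minimal sketch of what such a seeding helper could parse, assuming nix-eval-jobs' newline-delimited JSON output (the record keys `attr`, `drvPath`, and `meta` follow that format; the function name and the resulting dict layout are illustrative, not the project's actual schema). In a Django project this would typically be wrapped in a hypothetical management command and fed to `bulk_create`:

```python
import json

def parse_evaluation(jsonl_text):
    """Parse nix-eval-jobs output (one JSON object per line) into
    plain dicts ready for bulk insertion into the database."""
    records = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue  # skip blank lines in the stream
        job = json.loads(line)
        records.append({
            "attr": job.get("attr"),
            "drv_path": job.get("drvPath"),
            "meta": job.get("meta") or {},
        })
    return records
```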
I believe that to track the channel bumps, the easiest way is to regularly fetch the https://nixos.org/channels data
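A hedged sketch of that polling approach, assuming the channel releases expose their current commit as a plain-text `git-revision` file under channels.nixos.org (this endpoint exists today but, as noted below, is not a stable API):

```python
import re

CHANNELS_BASE = "https://channels.nixos.org"  # assumption: nixos.org/channels resolves here

def channel_revision_url(channel):
    """URL of the plain-text file holding the channel's current commit."""
    return f"{CHANNELS_BASE}/{channel}/git-revision"

def parse_revision(body):
    """Validate a fetched body as a 40-hex git commit SHA."""
    rev = body.strip()
    if not re.fullmatch(r"[0-9a-f]{40}", rev):
        raise ValueError(f"unexpected revision payload: {rev!r}")
    return rev
```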
It's not certain this API will stay available in the long term; I don't advise using it.
Well, the way the other tools do it is by cloning the repo and looking at the refs of the branches linked to the channels.
I think that's the sure way to go, or you can listen to events from the nixpkgs GitHub repository.
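The branch-ref approach can be sketched without a full clone by listing the remote refs, assuming channel names double as branch names on the nixpkgs repository (worth verifying per channel); the function names here are illustrative:

```python
import subprocess

def parse_ls_remote(output):
    """Parse `git ls-remote` output, one `<sha>\t<ref>` pair per line,
    into a {branch: sha} mapping."""
    revisions = {}
    for line in output.splitlines():
        if not line.strip():
            continue
        sha, ref = line.split("\t")
        revisions[ref.removeprefix("refs/heads/")] = sha
    return revisions

def channel_branch_revisions(remote="https://github.com/NixOS/nixpkgs.git",
                             channels=("nixos-unstable",)):
    """Map channel names to commit SHAs by listing the remote branch refs,
    the same trick pr-tracker-style tools use."""
    out = subprocess.run(
        ["git", "ls-remote", remote] + [f"refs/heads/{c}" for c in channels],
        check=True, capture_output=True, text=True,
    ).stdout
    return parse_ls_remote(out)
```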
So ingestion of manually evaluated nixpkgs was implemented. For a "perfect" solution, all that's left is:
In the meantime, what we can hack is:
We need to add the meta attributes in the ingester:
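A sketch of what adding meta attributes in the ingester could look like, assuming each record comes from nix-eval-jobs run with `--meta`; the chosen field list (including nixpkgs' `knownVulnerabilities`) is an illustrative guess at what a security tracker cares about, not the project's schema:

```python
# Meta attributes plausibly relevant to a security tracker (assumed list).
SECURITY_META_FIELDS = ("knownVulnerabilities", "position", "homepage", "maintainers")

def extract_meta(job):
    """Keep only the meta attributes the tracker cares about from one
    nix-eval-jobs record; absent fields are simply omitted."""
    meta = job.get("meta") or {}
    return {field: meta[field] for field in SECURITY_META_FIELDS if field in meta}
```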
The security tracker acts on supported channels, and we need to ingest an evaluation of all of nixpkgs for any given supported channel at any point in time.
The tracker should subscribe to channel bumps (an open problem); see how https://git.qyliss.net/pr-tracker detects them and how https://git.eno.space/label-tracker.git/ tracks them.
Implementation proposal
Every time a channel bumps, re-pull the repository, extract a worktree of that channel (or git clone via the reference for a fast checkout), run
nix-eval-jobs
on that commit SHA, collect the result, and archive it as JSON including the meta results (!!!). Run this as a background job or a cron job that can easily be managed by infrastructure people or administrators to perform maintenance tasks like cancelling evaluations, restarting evaluations, configuring the number of concurrent evaluations, etc.
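The worktree-plus-evaluation step above can be sketched as follows; the `/tmp` scratch location, the `release.nix` entry point, and the `--meta`/`--workers` flags are assumptions to adapt to the real deployment:

```python
import subprocess

def eval_command(worktree, workers=4):
    """argv for one nix-eval-jobs run; entry point and flags are assumed."""
    return ["nix-eval-jobs", "--meta", "--workers", str(workers),
            f"{worktree}/pkgs/top-level/release.nix"]

def evaluate_channel(repo, rev, outfile):
    """Check out `rev` into a throwaway worktree, run nix-eval-jobs on it,
    and archive the JSONL stream; intended to run from a background or
    cron job so administrators can cancel or restart evaluations."""
    worktree = f"/tmp/eval-{rev}"  # hypothetical scratch location
    subprocess.run(["git", "-C", repo, "worktree", "add", "--detach", worktree, rev],
                   check=True)
    try:
        with open(outfile, "w") as sink:
            subprocess.run(eval_command(worktree), check=True, stdout=sink)
    finally:
        subprocess.run(["git", "-C", repo, "worktree", "remove", "--force", worktree],
                       check=True)
```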
Ideas for the future
Expose this evaluation data publicly and let people access it directly; it's useful in general.