Lockfiles bloat the Nixpkgs tarball #327064
This issue has been mentioned on NixOS Discourse. There might be relevant details there: https://discourse.nixos.org/t/cargo-lock-considered-harmful/49047/2
I have amended the OP with another possible solution.
I think for externalizing lockfiles it'd be a good idea to actually determine which other distros do similar things to nixpkgs (vendor their own lockfiles for reproducibility) before committing to such an idea. The main reason I say this is that I love the idea, but I think it could easily become a management nightmare when things are externalized this way, and it would really only be worth the effort if it's actually maintained by a larger team outside of Nix, such as any of the aforementioned distros.
Lockfiles are sadly a necessity whenever dependencies aren't pinned (and even then, parsing lockfiles can be better than a FOD alternative). IPFS seems like it'd be a good fit for the external lockfile repo? Just pin the files after merging the PRs. Of course, hosting them normally is an option as well, but all potential nixpkgs contributors would need upload access for WIP PRs. The problem with an external lockfile repo is that we'd have to completely ditch lockfile parsing (as it would require IFD) and switch to FODs, which may force us to rewrite some Nix code (currently, Gradle support does that, so it would be affected) and maintain more hashes. It still seems like the better option of the two, though.
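For context, a minimal sketch of what "switch to FODs" means here — the package name, the fetch command, and the hash are all placeholders, not a real Nixpkgs fetcher:

```nix
{ stdenv }:

# A fixed-output derivation: the builder may access the network, but the
# result is pinned by a single output hash rather than by a lockfile
# vendored into the Nixpkgs tree.
stdenv.mkDerivation {
  name = "example-tool-vendored-deps";

  outputHashMode = "recursive";
  outputHashAlgo = "sha256";
  # Placeholder; re-pinned whenever the upstream dependency set changes.
  outputHash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";

  buildCommand = ''
    # Let the ecosystem's own tooling resolve and download the dependency
    # graph into $out; "example-package-manager" stands in for cargo,
    # gradle, yarn, etc.  (Real fetchers also handle certificates, proxies
    # and output normalisation for reproducibility.)
    example-package-manager fetch --output "$out"
  '';
}
```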
Interesting thought but the problem with IPFS remains that we need someone to pin the files or they will inevitably be lost.
Anyone can create a PR. Ideally though, we wouldn't even let users upload lockfiles and rather have them be generated by some trusted infrastructure with users merely providing upstream versions they need to have a lockfile for. Remember, lockfiles are security-critical.
Given the performance issues of
Note that the need for this to happen exists on the time scale of months~years, not days~weeks. Also, not all lockfiles must necessarily go, but there must be some sort of limit on how much of our "data budget" we use on them.
I migrated Nim to lockfiles and it has fixed a lot of problems, but the lockfiles are only getting bigger. I'm in favor of deduplicating the contents of the lockfiles in a centralized place, but I think it would take special tooling that would be somewhat consistent across languages. If we can make it clear that lockfiles and "supply-chain" security are one and the same then maybe we can get funding for a solution, but now I see that the NGI budget is getting cut.
This issue has been mentioned on NixOS Discourse. There might be relevant details there:
Just throwing an idea out here: what if we allowed "import from builtin", which would allow us to store lockfiles in a different repo, fetch them lazily, and still use them at eval time? It would still slow down eval, but not nearly as much as arbitrary IFD.
I'd like to point out that this is a space-time trade-off. This is also a negative for security. We have no insight into what a single hash represents in terms of dependency graph.
That's a good point. I'd say that makes it a space-space trade-off though: Space in the tarball vs. space in the binary cache. I consider space in the tarball to be a lot more precious as it affects each and every user because of the tarball's status as the source of all truth. The tarball size is also only one order of magnitude greater than the size of all lockfiles, making lockfiles a significant contributor to bloat.
You don't have such insights at eval time but, while convenient, that's not a necessity. You could just take a look at the dependency declaration file as well as the vendor tarball to figure out the "full" dependency graph.
This is not true. Binary cache size growth is a problem that cost some users dearly.
It is a necessity to statically reason about the dependency graph. Sure, you can write tooling that inspects derivation outputs, but that's another level of tooling complexity, and it makes it very expensive to scan a package tree. Additionally, I've never seen a convincing overrides story for any FOD packager. I feel like we are sacrificing way too much of what makes Nix good with these hacks.
Sure, but as I mentioned previously, "big" vendor FODs simply aren't a great contributor here. It's not uncommon for output paths to be a few orders of magnitude larger than "big" vendor FODs, and those change on every rebuild (x4 for all our platforms) while FODs only change on updates and are usually the same on any platform. As also mentioned, optimisations for the binary cache that IMV are unavoidable going forward, such as deduplication, will reduce the difference between "big" FODs and lots of tiny FODs to almost nothing. It's not a significant contributor to unsustainable growth currently and will likely be even less significant going forward; at worst slightly less efficient than the alternative. I don't see a significant point to be had w.r.t. binary cache size.
The "cost" of big FODs only hits you when you're building stuff yourself and in that case you'd have to compare the 15-30MiB to the rest of the inputDerivation which, for a typical rust package such as
We all use Nix for this reason; I feel you. I'd much prefer if we had a reasonably manageable package set a la haskellPackages instead of a separate subset package set for each drv, which is what the current lockfiles represent. That'd allow for static reasoning as well as sustainable tarball size & eval time growth, but that's not the reality we live in: we have to choose one. Given that the use-cases for reasoning about the entire source dependency graph (remember: this is source code, not build artifacts) are rather fringe and could be served less elegantly through other methods, I see the trade-off in favour of abstaining from lockfiles.
At a theoretical level, I don't see how it'd be any different to a lockfile packager. You'd pass a new/updated/different lockfile in either case but you'd have to update the vendor hash with a FOD packager which is a slight overhead and a little inefficiency but not unreasonably so.
I feel that both hacks sacrifice what makes Nix and Nixpkgs good; neither is ideal. The best solution is and always will be to do our job as a distro and define one package set for all dependent packages to use, making any lockfile irrelevant. That's really hard work, of course.
Linking #333702 here, which is Rust-specific but which I hope can point to a better approach for language ecosystems in general.
This issue has been mentioned on NixOS Discourse. There might be relevant details there: https://discourse.nixos.org/t/state-of-haskell-nix-ecosystem-2024/53740/8
I was wondering if switching the compression algorithm may be worth it. I did some measurements myself, comparing zstd -19 and gzip -9.
We do not control GitHub's tarball compression. The only other place where the size of lockfiles matters is git, which also only supports gzip compression. zstd or other means of compression are not relevant to this discussion.
Well, if nix could decompress and cache files, say |
We'd then have the issue that we'd be committing binary files instead of text files. Git is at its best when working with text, especially when resolving merge conflicts. Although perhaps some specific lockfiles are already bad at avoiding merge collisions, so in those very specific edge cases we wouldn't be losing much by committing binary data... |
Decompressing lock files at eval time could wreak havoc on eval times. |
You could run a fast enough hash on it first and then use a cached copy if available.
We would then keep a copy of nixpkgs with its compressed artifacts in the store, alongside a bunch of decompressed lock files cached to the store with no gcroot, just to save a few kilobytes over the wire. The decompression could, in cppnix, also require pausing eval like IFD currently does. It makes more sense to me to switch the releases.nixos.org tarballs to use zstd and discourage using GitHub tarballs.
Generally, Cargo.lock files are updated in tandem with the corresponding package's version, so this should generally not be an issue. However, the other drawbacks mentioned mean that this is still a bad idea.
Introduction
The size of the Nixpkgs tarball places a burden on the internet connections and storage systems of every user. We should therefore strive to keep it small. Over the past years that I've been contributing, it has more than doubled in size.
In #327063 I discovered the quite negative effects of Cargo.lock files in the Nixpkgs tree, with just 300 packages bloating the compressed Nixpkgs tarball by ~6MiB.

Here I'd like to document the status quo of the sizes of lockfiles found in Nixpkgs and other automatically generated files of significant size.
Methodology

- ncdu --apparent-size on the nixos-24.05 tree (a046c12)
- Cargo.lock file (a few dozen KiB)
- gzip -9 < file | wc -c or tar -cf - files... | gzip -9 | wc -c
- Amounts:
Results
Numbers for the lockfiles and patches are (total bytes) or (total bytes / number of files = average per file).
Notable non-generated files
For comparison and out of interest I also recorded the compressed sizes of notable files that were made by hand:
Analysis
Lockfiles contribute greatly to the compressed Nixpkgs tarball size. In total, you can attribute 8793206 Bytes ~= 8.4MiB out of the ~41MiB to lockfiles used in individual packages (~20%). The biggest offenders by far are Rust packages' Cargo.locks, which are analysed in deeper detail in #327063.

The worst offenders in terms of Bytes per package are packages which lock their yarn dependencies at ~130KiB/package. These are fortunately rare but still add up to ~600KiB.

The next worst appears to be bazel_7, which single-handedly requires ~100KiB of compressed data.

More notably bloated packages are those which have a package-lock.json at ~50KiB/package and electron's two info.jsons, which combine to ~50KiB.

Patches also present a significant burden for compressed tarball size. Individually, they're usually quite small, but they're very common, adding up to 2.6MiB.
All automatically generated files discovered here (package lockfiles + set lock files) sum up to 19558712 Bytes ~= 18.6 MiB (compressed) which is about half the size of the Nixpkgs tarball.
Discussion
Solutions
There are a few measures that could be taken to reduce the file size of generated files:
Summarise hashes (e.g. vendorHash)
Rather than hashing a bunch of objects individually, hash a reproducible record of all objects. This is already the status quo for e.g. buildGoModule.
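As a rough sketch (the package, repository, and hashes below are placeholders, not a real Nixpkgs package), this is what the summarised form looks like for a Go package:

```nix
{ buildGoModule, fetchFromGitHub }:

buildGoModule {
  pname = "example-tool";
  version = "1.2.3";

  src = fetchFromGitHub {
    owner = "example";
    repo = "example-tool";
    rev = "v1.2.3";
    hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
  };

  # A single summary hash pins the entire vendored module set; no per-module
  # hashes or lockfile need to live in the Nixpkgs tree.
  vendorHash = "sha256-BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=";
}
```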
Record less info
Some info is not strictly necessary to record for the lock files to function. For each elisp package, for instance, at least two commit ids and two hashes are recorded. Commit IDs could probably be dropped entirely here, which would reduce the compressed file size by 1/3.
Fetch files rather than vendoring them
Oftentimes, files required for some derivation are available from an online source. Fetching the file rather than vendoring it into the Nixpkgs tree reduces the space required to a few dozen Bytes (~32 Bytes for the hash and a similar amount for the URL).
This is especially relevant for patches as those are frequently available elsewhere. Use pkgs.fetchpatch2 in such cases.
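For illustration, a hedged sketch of fetching an upstream patch instead of vendoring it — the URL and hash are placeholders:

```nix
# Inside a package definition; fetchpatch2 normalises the patch so the hash
# stays stable even if the hosting service changes cosmetic details.
patches = [
  (fetchpatch2 {
    url = "https://github.com/example/project/commit/0123456789abcdef0123456789abcdef01234567.patch";
    hash = "sha256-CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC=";
  })
];
```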
Lock an entire package set

Lockfiles usually represent a set of desired transitive dependency versions that some language-specific external SAT solver spat out. These are frequently duplicated because many separate packages use the same libraries but are often not exact duplicates due to differences in upstream-defined dependency constraints.
Instead, it is possible to record one large snapshot of the latest desirable versions of all packages in existence in some ecosystem and have dependent packages use the "one true version" instead of their externally locked versions.
It also provides efficiency gains as dependencies are only built once and brings us closer to what the purpose of a software distribution has traditionally been: Integrate one set of packages.
This approach is used quite successfully by e.g. haskellPackages, measuring at just 133 Bytes per package.

This is not feasible for all ecosystems, however, as just the names of all 3330720 npm packages (no hashes) amount to ~20MiB compressed and the hashes would be at least another 100MiB. Perhaps a subset approach could be used though, only accepting packages into the auto-generated set that are depended upon at least once in Nixpkgs.
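As a hypothetical sketch of what the "one true version" approach looks like in practice for Haskell (the package and its dependencies are made up), a dependent package simply takes its inputs from the shared set instead of carrying its own lock data:

```nix
# Typically generated by cabal2nix and built via haskellPackages.callPackage;
# every dependency resolves against the single shared haskellPackages snapshot.
{ mkDerivation, aeson, text, lib }:

mkDerivation {
  pname = "example-app";
  version = "0.1.0";
  src = ./.;

  # No per-package lockfile: the versions of aeson and text are whatever the
  # shared package set currently pins.
  libraryHaskellDepends = [ aeson text ];

  license = lib.licenses.mit;
}
```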
Future work
Amendments
Another solution: External lockfile repo
This is another solution I came up with after publishing and being exposed to some of the reasons why lockfiles are vendored. It often happens because upstream provides no lockfile themselves, but one is necessary for the software to build reproducibly, which in our case oftentimes means to build at all.
A lockfile must:
Vendoring lockfiles into the Nixpkgs tree achieves all of these but it's not the only way to achieve that.
For such cases, it would alternatively be possible to store these 3rd-party generated lockfiles in a separate repository and merely fetch them from Nixpkgs. You'd fetch them individually, not as a whole, so the issue of size only affects build-time closures, which would have been affected either way. (The current issue with lockfiles is that they bloat Nixpkgs regardless of whether they are useful to the user or not.)
This solution would work in cases where lockfiles are only required as derivation inputs (not eval inputs), which I believe covers most usages of vendored lockfiles in Nixpkgs.
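A rough sketch of what this could look like, assuming a hypothetical external lockfile repository — the URL, hashes, and package are all made up; the point is that the fetched lockfile is just another derivation input:

```nix
{ stdenv, fetchFromGitHub, fetchurl }:

let
  version = "1.2.3";

  # Hypothetical external lockfile repository; the lockfile is fetched like
  # any other source, so it never has to live in the Nixpkgs tree.
  lockFile = fetchurl {
    url = "https://example.org/nixpkgs-lockfiles/example-tool/${version}/yarn.lock";
    hash = "sha256-EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE=";
  };
in
stdenv.mkDerivation {
  pname = "example-tool";
  inherit version;

  src = fetchFromGitHub {
    owner = "example";
    repo = "example-tool";
    rev = "v${version}";
    hash = "sha256-DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD=";
  };

  # The lockfile is only used as a derivation input, not at eval time, so it
  # only affects the build-time closure of this one package.
  postPatch = ''
    cp ${lockFile} yarn.lock
  '';
}
```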
This could even become a cross-distro effort as we surely are not the only distro which requires pre-made lockfiles in its packaging.