Define libc targeting plan #69361
Comments
Tagging subscribers to this area: @dotnet/area-meta

Issue details: We need a new libc targeting plan. We've been able to skip this need for some time, since we've used CentOS 7 as our primary build OS. For Arm64, we've used Ubuntu 16.04. Neither is feasible as a long-term option. More context: dotnet/core#7437

The proposed plan is:

From my perspective, it would be ideal if we could acquire these dependencies from a trusted server in the RHEL ecosystem. The strongest concerns have come from that ecosystem, so it makes sense to orient the solution in that direction. I don't see any downsides of such an approach for other ecosystems.

/cc @omajid @jkotas @MichaelSimons
Would the Python community's work on the manylinux project be relevant here? It solves a similar problem, no? Some of their work may be applicable.
Excellent question. We actually looked at the manylinux project. Our plan is to do something very similar, except we already have infrastructure that gives us the same outcome without quite so much ceremony. Each .NET TFM will target a different libc version. That's the contract. It ends up being much the same as manylinux.
As we already have the rootfs targeting plan for our cross-architecture builds, it might be worthwhile to use a similar model for targeting a down-level libc version (basically cross-compiling from x64 to x64-with-older-libc).
What does rootfs targeting look like with containers? The rootfs instructions I have seen have all been oriented around bare metal. Does it work the same in containers? I think we want to continue to use containers as our primary build environment.
We currently use it in containers for our arm32 and arm64 builds (as well as our FreeBSD build).
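For readers unfamiliar with the mechanism: cross-compiling against a rootfs essentially means pointing the compiler's sysroot at a directory tree holding the target's headers and libraries, so the host's own libc never enters the picture. A minimal sketch of the shape of such an invocation (the paths and target triple here are illustrative, not dotnet/runtime's actual configuration):

```shell
# Illustrative only: cross-compiling x64 -> arm64 against a rootfs.
# ROOTFS_DIR and the target triple are assumptions for this sketch,
# not the real dotnet/runtime values.
ROOTFS_DIR=${ROOTFS_DIR:-/crossrootfs/arm64}
TARGET=aarch64-linux-gnu

# Compose the compiler invocation; --sysroot makes clang resolve
# headers and libraries from the rootfs instead of the host system.
CMD="clang --target=$TARGET --sysroot=$ROOTFS_DIR -fuse-ld=lld hello.c -o hello"
echo "$CMD"
```

Because the container only needs to *hold* the rootfs directory, this works the same inside a container as on bare metal.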
Cool. Maybe that will work. It's a question of whether we can acquire the desired rootfs. I'd look to @omajid for that. Got a link to a Dockerfile that does that today?
cc @tmds
Any updates on this issue for either .NET 7 or 8, so distributions with lower glibc versions can continue to work?
Yes, so far it looks like we have a path forward to fixing this for .NET 7 and 8, by cross-building for an OS with an older glibc. I'll describe the setup we intend to use for .NET 7 arm64 Linux:
@janvorli kindly gave me some pointers and I am working on this. I hope to have a fix soon for .NET 7. For .NET 8 there might be more changes, including using later Ubuntu releases as the host, and/or using Mariner instead for the official build - but the general idea is the same (using cross-compilation to support a low-enough glibc).
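In rough outline, that flow looks something like the following. The script and argument names follow dotnet/runtime's cross-building workflow docs but may drift between releases, so the block only prints the steps rather than running them (they require a runtime repo checkout):

```shell
# Sketch of the cross-build flow described above. Script names and
# arguments are taken from dotnet/runtime's cross-building docs and
# may differ per release; printed here rather than executed.
steps='# 1. Generate an Ubuntu 16.04 (glibc 2.23) arm64 rootfs:
sudo ./eng/common/cross/build-rootfs.sh arm64 xenial

# 2. Cross-build the runtime against it from an up-to-date x64 host:
ROOTFS_DIR=/crossrootfs/arm64 ./build.sh clr --cross --arch arm64'
printf '%s\n' "$steps"
```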
@sbomer That is great news. Sorry to ask the annoying next question, but any guess on the timeline for these changes?
@normj it's a fair question :) I'm aiming to get it fixed in one of the next servicing releases for .NET 7 - probably 7.0.4 (sounds like it is too late to make 7.0.3). And for .NET 8, probably Preview 2. This assumes there aren't any big unforeseen blockers. I'll post any updates here.
Great, so if I understand correctly, by targeting Ubuntu 16.04 that would set the minimum glibc version to 2.23. Is that correct?
@normj yes, that's correct.
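For anyone wanting to check a given host against that floor, a small sketch follows. The helper name is made up for illustration; on a live glibc system the version string can be taken from the first line of `ldd --version`:

```shell
# Compare a glibc version string against the 2.23 floor implied by
# targeting Ubuntu 16.04. meets_baseline prints "yes" or "no".
meets_baseline() {
  floor=2.23
  # sort -V orders version strings numerically; if the floor sorts
  # first (or ties), the candidate version is >= 2.23.
  lowest=$(printf '%s\n%s\n' "$floor" "$1" | sort -V | head -n1)
  [ "$lowest" = "$floor" ] && echo yes || echo no
}

# On a live system the argument can come from: ldd --version | head -n1
meets_baseline 2.26   # Amazon Linux 2 -> yes
meets_baseline 2.17   # CentOS 7 -> no
```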
Please feel free to ping me if you want any early tests of new builds on Amazon Linux 2, which is currently stuck because it uses glibc 2.26.
@normj we have a CI build of the .NET 7 runtime binaries with the fix: https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/142699/artifacts?artifactName=CoreCLRProduct___Linux_arm64_release&api-version=7.0&%24format=zip. Would you be able to test using these bits? You can use the instructions at https://github.com/dotnet/runtime/blob/main/docs/workflow/testing/using-your-build-with-installed-sdk.md, but use the .NET 7 SDK together with the binaries from this build.
@sbomer Yes, definitely - I'll give the bits a thorough test.
@richlander this change might have an impact on third-party .NET projects that carry native libraries on arm64. If they were targeting net7.0 on arm64, until now they could also include shared libraries which needed glibc 2.26 to run. With this change, they might now be expected to provide libraries that run on a lower glibc version. Do you think this deserves a larger announcement and notice to the .NET ecosystem?
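Library authors wanting to check what their shared objects actually require can look at the versioned symbol references in the dynamic symbol table. A small sketch, parsing `objdump -T libfoo.so` style output; the helper name and the sample input below are fabricated for illustration:

```shell
# Extract the highest GLIBC_x.y symbol version referenced by a shared
# object, given `objdump -T libfoo.so` style output on stdin.
max_glibc_ver() {
  grep -o 'GLIBC_[0-9.]*' | sed 's/GLIBC_//' | sort -V | tail -n1
}

# Fabricated sample of objdump -T output, for illustration only:
sample='0000000000000000      DF *UND*  0000000000000000  GLIBC_2.17 memcpy
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.27 glob
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.4  __stack_chk_fail'
printf '%s\n' "$sample" | max_glibc_ver   # -> 2.27
```

A result above the targeted baseline (e.g. 2.27 against a 2.23 floor) means the library will fail to load on older distros.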
@sbomer I'll keep testing, but I wanted to give some initial positive feedback. I was able to deploy an ARM-based AWS Lambda function using a self-contained publish, substituting in the CoreCLR binaries from the link above. The Lambda function worked perfectly, compared to before, when it failed immediately with the GLIBC 2.27 warning. This was our key scenario that was blocked and is now unblocked. Thank you all for your hard work making this change!
@sbomer based on doing various deployments, including ASP.NET Core apps, the build looks good for Amazon Linux 2. I was surprised the patch only required updating 3 files (libclrjit.so, libcoreclr.so, and System.Private.CoreLib.dll).
Plan for .NET 8:
Regarding musl-libc, Alpine 3.13 indicates two things:
In the second form, Alpine 3.13 represents baseline compatibility with musl libc v1.2.2 for the entire set of musl distros: https://wiki.musl-libc.org/projects-using-musl.html.

Now that Alpine 3.13 has reached EOL, can we make it clear that we are keeping this version for the purpose of testing against the musl libc v1.2.2 baseline, and not for distro support? Otherwise, we can find another distro for musl libc baseline testing in CI.

Compared to glibc 2.23 (released on February 19, 2016), musl 1.2.2 is very recent (January 15, 2021). If anything, we should instead try to lower the requirement to v1.2.0 (February 20, 2020) to match musl's "Stable vs. EOL" series (https://musl.libc.org/releases.html) rather than the Alpine Linux release cycle.
That is interesting. We've been targeting Alpine because (perhaps naively) we thought that the only folks using .NET with musl were Alpine users. We're definitely open to adopting a different plan for musl targeting if it helps folks. For glibc, we are planning to target Ubuntu 16.04 for .NET 8.
We use .NET SDK 7.0.X for building .NET SDK 7.0.(X+1). That is how the old glibc reference got into the ilc binary. This should fix itself in the next servicing update (needs to be verified).
Hello. I am the present maintainer of IKVM. I wanted to add my two cents, since we're encountering a similar issue with IKVM.

IKVM, as many of you know, compiles Java to MSIL. We distribute a full JDK, based on OpenJDK. Up until now we've been rewriting the native C parts in C#. This has worked "fine". But we've decided to actually just build and distribute the C as-is, which will remove a lot of the work of keeping C# copies of things up to date.

So we have the same issue: building a dozen different native libraries targeting every combination of OS/arch that .NET itself supports. Since IKVM supports .NET 8, say, we should be runnable on every RID .NET 8 supports (we also have to deal with backwards compatibility, since we maintain support back to .NET Core 3.1, and Framework, back to whatever it supports). That's a lot of OSes and architectures. A lot of different versions of glibc. And musl. And whatever else.

The path we're taking is to build SDKs for each of the .NET target RIDs, with the headers and libraries we require, back to the versions the various TFMs we support advertised. Including for Windows and Mac OS X. We then use clang to cross-compile from whatever the host OS/platform is against those headers and libraries. This has so far worked in tests for every OS/arch we've encountered. We're going to have a separate project, called ikvm-native-sdk or some such, that produces these SDKs.

Windows and Mac are easy: those are just single bundles the user has to acquire. Linux is not easy, but I've pretty much got it working. We've got scripts to build a cross-compile toolchain, and then use that toolchain to rebuild GCC, glibc, etc. This gives us something like a 'netcoreapp3.1-linux-x64' /include and /lib directory, which we can then tar up. We then use a clang MSBuild project type, which supports an inner-build structure (just like TargetFramework). For each inner build, we point clang at the proper SDK directory. So we can have a single MSBuild project file that builds a dozen or so different targets.

Anyways, cool. The key takeaway here is we're not going to be using old versions of Red Hat or Ubuntu, or containers, because we want our devs to be able to build the entire product, including all of the native libraries for every supported RID, from whatever machine they work on. We can produce Linux shared objects while building on Windows or Mac, or Windows DLLs while building on Linux or Mac, etc. And we'll have a nice little library of tars that represent a snapshot of the supported APIs at that moment in time. Food for thought.
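A toy version of the shape of that per-RID inner-build loop (the RID list, target triples, and paths are illustrative; the real setup lives in MSBuild, not shell):

```shell
# Illustrative per-RID cross-compile loop: one clang invocation per
# target RID, each pointed at its own SDK (headers + libs) directory.
# Triples and paths are made up for this sketch.
cmds=""
for rid in linux-x64 linux-arm64 linux-musl-x64; do
  case "$rid" in
    linux-x64)      triple=x86_64-linux-gnu ;;
    linux-arm64)    triple=aarch64-linux-gnu ;;
    linux-musl-x64) triple=x86_64-linux-musl ;;
  esac
  cmds="$cmds
clang --target=$triple --sysroot=sdk/$rid -shared -fuse-ld=lld -o out/$rid/libnative.so native.c"
done
printf '%s\n' "$cmds"
```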
Hey @wasabii, thanks for the write-up! I'm not sure I fully understood your flow. When you say you build a cross-compile toolchain, is this a cross-compile toolchain on your users' machines, or a toolchain on your own platform? If the latter, does that mean you're producing, say, dozens of versions of your package, one for each different Linux distro?
It is the latter. Just the developers need the toolchains - just a bunch of tars for them to download. And the answer is no: we're producing the set we need to target the various distros .NET supports, with careful selection of glibc versions, etc. There will be overlaps. There might be a lot eventually - unknown. So far it's a half dozen, but it will probably end up being more in the future.
https://github.com/ikvmnet/ikvm-native-sdk/releases/tag/20230613.2 This is a new approach for us, so we haven't gone too deep into the various distros yet. But the approach seems sound. It will let our developers build everything for all platforms from whatever machine they're on, including NuGet packages with all the native libs in one place. They just need to download a bunch of tars. No Docker. No WSL. And it lets us put the C in MSBuild.
Did you get lawyers to confirm that everything in your approach is compliant with all the respective licenses? The kind of problems this approach can run into are touched on in https://github.com/ikvmnet/ikvm-native-sdk/blob/main/macosx/README.
For the Linux-related stuff there is no concern at all. Obviously the Windows and Mac stuff differ in that regard. Windows should be fine, since they don't have a hardware restriction. Apple is a different matter. Regardless, that's a different subject.
It sounds like your end state is very similar, but you're taking on building glibc - I think that's the part we don't want to do. Keeping glibc up to date and supported is something we want to rely on distro maintainers for, in particular for potential security issues. By building against a supported copy of glibc from a trusted maintainer, we avoid maintaining glibc ourselves.
Perhaps, except we don't distribute glibc as part of our application. It exists only to link against. Nothing in these SDKs is distributed as part of the application. It's C headers and .so files for lld at build time; it links against whatever the user is running at runtime. It's also why we're not likely to need different SDKs per distro - all that matters to us is that the ABI surface is correct. The only time we'd upgrade glibc is if .NET starts requiring a newer version or something. Something I'm sure you guys avoid, the same way you get away with building against a super old Ubuntu.
Also... don't overestimate how difficult it is to actually build a bunch of Linux libraries and install them into a tar. Yeah, it's a thing. But it's also not a super complicated thing. It takes a lot of time to build them, but that's CI/CD's problem.
Note that it's theoretically possible for a vulnerability to exist in those header files. By relying on distro support, we avoid exposure to those files. Understandably, you may have a different risk profile. Also, note that we only use the Ubuntu base libraries (glibc, etc.); the actual build happens in an up-to-date Mariner distribution with a current toolchain.
This work should now be complete.