
Investigate implementing header-translator on top of Swift's ClangImporter #345

Open · madsmtm opened this issue Jan 15, 2023 · 39 comments

Labels: A-framework (Affects the framework crates and the translator for them), enhancement (New feature or request)

madsmtm (Owner) commented Jan 15, 2023

In header-translator we have to do a bunch of hacks to account for the fact that a lot of things are not exposed through the libclang API. In particular, the Swift attributes are desirable; e.g. the new NS_SWIFT_UI_ACTOR will help us a lot in knowing which things require a MainThreadMarker before access!
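
For context, MainThreadMarker is a zero-sized proof token that can only be constructed on the main thread; bindings generated for NS_SWIFT_UI_ACTOR APIs would take it as a parameter. A stand-in sketch (objc2's real type checks via +[NSThread isMainThread]; the thread-name check below is only illustrative):

#[derive(Clone, Copy)]
pub struct MainThreadMarker(());

impl MainThreadMarker {
    pub fn new() -> Option<Self> {
        // Rust names the main thread "main"; objc2 asks the Objective-C
        // runtime instead, but the idea is the same.
        (std::thread::current().name() == Some("main")).then_some(Self(()))
    }
}

// A generated signature could then look like:
// fn set_needs_display(view: &NSView, _mtm: MainThreadMarker);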

So, crazy idea: what if we instead forked or used Swift's ClangImporter and built directly upon their work? They already handle all the edge cases, they have a system for importing .apinotes files, and so on.

Though that would probably require more fluency in C++ than I really have, as well as the hard work of actually figuring out how to interface with it in a nice way - and probably also a beefier computer than the one I currently have, to compile all that code :/

silvanshade (Contributor) commented Mar 3, 2023

I've started some initial work in this direction here.

Right now the plan is to start building out the bindings to ClangImporter primarily using cxx.

I originally tried using autocxx (which uses a modified bindgen to generate cxx directives) but ran into what I think was a stack overflow. Apparently there are some known issues where this can happen for particularly complex libraries.

It would have been nice if autocxx had worked but this more manual approach still seems manageable.

Note that I haven't tested this on macOS yet, so if you try to build it currently, you'll need to modify the build.rs script and update the TARGET const appropriately for that case.
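
Roughly, the bridge looks something like this (a minimal sketch - the shim header and function names here are hypothetical, not the actual API; cxx can't call C++ constructors directly, so construction goes through a small C++ shim):

#[cxx::bridge(namespace = "swift")]
mod ffi {
    unsafe extern "C++" {
        include!("cxx-swift/include/shim.h"); // hypothetical shim header

        /// Opaque C++ type; cxx only lets us own it behind a pointer.
        type ClangImporter;

        /// Shim wrapping the C++ factory, e.g. `ClangImporter::create(...)`.
        fn create_clang_importer() -> UniquePtr<ClangImporter>;
    }
}

fn main() {
    let importer = ffi::create_clang_importer();
    assert!(!importer.is_null());
}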

madsmtm (Owner) commented Mar 3, 2023

Cool!

I think I was planning on just rewriting header-translator entirely in C++, but if we can write most of it in Rust and then use ClangImporter through cxx then I'm all ears.

silvanshade (Contributor) commented:

There's still a lot left to do before this thing is usable, but there's at least enough in place now to show that we can create a ClangImporter instance: see here.

madsmtm (Owner) commented Jun 19, 2023

As part of this, I've done a small investigation into the current state of tools for automatic binding generation - that is, tools where you do zero or minimal configuration in the host language, and instead let a generator handle the tedious part.

Note that many of these use a helper library like objc2, cxx, wasm-bindgen or swift-bridge to actually handle the calling convention.

Bindings for calling Rust from another language:

Bindings for calling another language/framework from Rust:

Related: c2rust and diplomat.

madsmtm (Owner) commented Jun 19, 2023

To me, it is very apparent that bindgen is lacking, even for C projects, and I think the reason is that they are severely limited by libclang. autocxx is newer, but I suspect they will run into similar issues.

So what if we made a tool for the entire community, built directly upon clang's AST, such that all attributes are available to use (and additional support could be added through attributes instead of special comments)?
That is, we collaborate with autocxx and bindgen, and end up with a tool that can handle any clang invocation equally well, and generates code using objc2 or cxx, depending on the language used.


I first need to investigate what's possible with clang's debug AST output, since that would be a lot simpler, but it is very likely that we'll need to use clang's AST directly to do the binding generation. CppSharp is a good example of doing this.
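
For reference, clang's debug AST output is cheap to experiment with from Rust - a quick sketch (assumes a clang on PATH and the serde_json crate; `-ast-dump=json` exposes attributes that the libclang C API hides):

use std::process::Command;

fn dump_ast(header: &str) -> serde_json::Value {
    let out = Command::new("clang")
        .args(["-x", "objective-c", "-fsyntax-only", "-Xclang", "-ast-dump=json"])
        .arg(header)
        .output()
        .expect("failed to invoke clang");
    serde_json::from_slice(&out.stdout).expect("clang emitted invalid JSON")
}

fn main() {
    let ast = dump_ast("Foundation.h");
    // Top-level declarations live under "inner" in the TranslationUnitDecl.
    println!("{} top-level decls", ast["inner"].as_array().map_or(0, |d| d.len()));
}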

madsmtm (Owner) commented Jun 19, 2023

Ooh, just found c2rust-clang-exporter, which maybe does enough that we can do everything else from Rust?

silvanshade (Contributor) commented:

> To me, it is very apparent that bindgen is lacking, even for C projects, and I think the reason is that they are severely limited by libclang. autocxx is newer, but I suspect they will run into similar issues.

I also agree with this based on my experience.

And yes, autocxx already has this exact problem in a number of places. If you search through the repo issues you will see several instances where certain limitations are in place due to bindgen / libclang not providing enough information to produce a better or more precise result.

For the work I've been doing on generating bindings for ClangImporter, I've kind of gone back and forth with regard to using autocxx. Initially, I tried it and didn't have great results, because it often couldn't produce the bindings I needed.

Later, I tried using it again and had a bit more success, but still had to implement a lot of manual fixes or additions with plain cxx, and found that most of the utility that autocxx provided was in the support library functionality (more flexible memory management for handling C++ values) rather than in the bindings it could produce.

Now I've settled on factoring out that support code into a separate crate and using plain cxx, since it's cleaner overall.

> So what if we made a tool for the entire community, built directly upon clang's AST, such that all attributes are available to use (and additional support could be added through attributes instead of special comments)?

I've started thinking along the same lines as I've been working on the ClangImporter bindings.

I think it would be good to (try to) provide a better alternative to the current bindgen / libclang-based implementation that most projects are using.

> That is, we collaborate with autocxx and bindgen, and end up with a tool that can handle any clang invocation equally well, and generates code using objc2 or cxx, depending on the language used.

I think this would be an ideal goal. I'm not entirely sure how realistic it will be in the end for autocxx or bindgen to want to switch over to a new underlying implementation but at the very least it would be worth the effort to try and build such a thing and see what happens.

> Ooh, just found c2rust-clang-exporter, which maybe does enough that we can do everything else from Rust?

That looks promising.

Part of what I've been working on with the ClangImporter bindings includes the same AST definitions as exposed there, with the difference being that I've been using cxx to generate them rather than bindgen. The advantage to using cxx is that it's more precise and safer for generating bindings than bindgen, but also much less automatic (unless you also use autocxx, but as I mentioned earlier, I have kind of moved away from that).

Currently I'm focusing on finishing the ASTWalker functionality for visiting declarations exposed by the ClangImporter module loader. Things were delayed a bit in the last couple weeks due to the effort to factor out the autocxx stuff I mentioned earlier, plus finding and fixing some memory safety bugs in autocxx, and then some other misc. things that took longer than expected.

In any case, if you manage to start building something based on the AST definitions from the c2rust-clang-exporter crate, it should be reasonably straightforward to adapt that to what I manage to produce with the ClangImporter bindings once those are ready, if you're still interested in that approach.

One thing to note is that apparently the ClangImporter support for C++ is incomplete, IIRC, based on some comments I saw somewhere (either in the source or in the repo or forums, but I can't remember the details just now). But that might have been stated more with respect to the overall process of using it to generate Swift definitions. Since we would mostly be using ClangImporter just for the ability to load modules and then get access to the Clang AST nodes, and do our own codegen, maybe that isn't particularly relevant.

madsmtm (Owner) commented Jun 20, 2023

Oh, you've gone quiet for a while now so I didn't know you were still working on it - that's really nice!

I'll be joining you in the implementation effort some time in August, once I get back to my home country, and acquire a new Mac that is fast enough to compile LLVM stuff without taking minutes.

> [...] ClangImporter bindings once those are ready, if you're still interested in that approach.

I'm definitely still interested in using your bindings!

Also, I should say that my general plan for using it looks something like this:
[Header Translator Design (excalidraw diagram)]

The important part being that it is called multiple times, with different clang invocations, to be able to translate #if blocks as well. The user would also be able to opt out of that, though, and instead integrate it as a build script (or a proc-macro - not sure what's best yet? I think there's value in the build script, since you can more easily inspect the generated files, and it's easier to do dependency tracking / knowing when to rerun).
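
In code, the driving loop would look roughly like this (a sketch - `translate` and `merge` are hypothetical stand-ins for the real entry points):

struct Invocation {
    target: &'static str,
    clang_args: &'static [&'static str],
}

fn main() {
    // Each configuration evaluates `#if` blocks differently; diffing the
    // per-target results recovers the conditional structure.
    let invocations = [
        Invocation { target: "arm64-apple-macosx", clang_args: &[] },
        Invocation { target: "arm64-apple-ios", clang_args: &[] },
        Invocation { target: "x86_64-apple-macosx", clang_args: &[] },
    ];
    for inv in &invocations {
        println!("would translate with -target {} {:?}", inv.target, inv.clang_args);
        // let ast = translate("Foundation/Foundation.h", inv);
        // merge(&mut combined, inv.target, ast);
    }
}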

silvanshade (Contributor) commented Jun 20, 2023

> Oh, you've gone quiet for a while now so I didn't know you were still working on it - that's really nice!

Yeah, several parts of the project turned out to be a little more difficult to figure out than I anticipated, and I've started over a few times, so it's taken a bit longer than expected to get to a point where it's starting to look like something usable.

> I'll be joining you in the implementation effort some time in August, once I get back to my home country, and acquire a new Mac that is fast enough to compile LLVM stuff without taking minutes.

Sounds good.

Regarding compilation times, I think it's possible to get things set up to where the overall project compiles reasonably quickly.

Once the LLVM, clang, and swift code has been compiled to static libraries on my local environment, the rust side of things (re)compiles relatively quickly each time. Linking can also be sped up using mold and some other tricks.

So the heaviest part of the build process is fetching, configuring, and building the libs on the C++ side.

Early on, I experimented with pre-packaging all that stuff up in some Docker containers so that at least the CI could be relatively quick (only needing to fetch and recompile the Rust code for each test run). The containers came out to about 6GB with everything built in debug mode with symbols, which isn't too bad. (Although currently I'm compiling those libs in release mode and only care about debug/release on the Rust side).

It's possible to do something similar for the main build process, perhaps also combining with something like CMake's ExternalProject and the cmake crate.

That would probably also make more sense if the plan is to publish the actual tool as a binary through crates.io. But I'm not sure what the best approach there is yet. I would ideally like for the tool to be easy for people to build or install without having to go through several manual configuration steps though.

In any case, I think keeping the build times manageable (in a portable way) is feasible with a little bit of planning.

> Also, I should say that my general plan for using it looks something like this: [...]

> The important part being that it is called multiple times, with different clang invocations, to be able to translate #if blocks as well. The user would also be able to opt out of that, though, and instead integrate it as a build script (or a proc-macro - not sure what's best yet? I think there's value in the build script, since you can more easily inspect the generated files, and it's easier to do dependency tracking / knowing when to rerun).

That seems reasonable. I also think the build script is probably the more flexible approach as well. At least, I would start with that before trying to build a proc-macro version.

madsmtm (Owner) commented Jun 23, 2023

Linking bindgen issues rust-lang/rust-bindgen#1740 and rust-lang/rust-bindgen#297.

An issue discovered there is that it's difficult to bundle clang for Windows, so perhaps the build script / proc-macro solution is out of the question once we stop using libclang. But I consider that bad practice anyhow, since it vastly increases compile time and requires the entire C compiler to run just to get header information, so it's probably not worth ever trying to support. Rather, I'd want to put a lot of work into the diffing engine!

silvanshade (Contributor) commented:

> Linking bindgen issues rust-lang/rust-bindgen#1740 and rust-lang/rust-bindgen#297.
>
> An issue discovered there is that it's difficult to bundle clang for Windows, so perhaps the build script / proc-macro solution is out of the question once we stop using libclang.

Yeah... I think I tried early on to get the bindings for ClangImporter building for Windows but ran into some problems (I don't remember the specifics) and just decided to ignore that for now.

But even if we can't realistically support Windows, I'm not sure why that should preclude supporting a build-script or proc-macro approach. Wouldn't it be okay (if non-ideal) to just say that Windows is unsupported and fail in that case?

> But I consider that bad practice anyhow, since it vastly increases compile time and requires the entire C compiler to run just to get header information, so it's probably not worth ever trying to support. Rather, I'd want to put a lot of work into the diffing engine!

I'm not entirely sure I understand what you mean here. At least, in order to use ClangImporter, we need to link against a significant portion of the LLVM / clang / swift libraries, and in order to run ClangImporter, the compiler has to be invoked, creating the appropriate context in which to load the modules, right?

Or are you thinking of more along the lines of keeping the tools separate, where the diffing part is a distinct tool from whatever ends up using the ClangImporter bindings?

In any case, I think there may be some way for us to handle the Windows situation but I haven't thought about it too much yet. At the very least, we could probably support it in a somewhat indirect way through WSL (since WSL can still access the Windows FS). In fact, I'm developing the bindings in a WSL instance on my current machine.

madsmtm (Owner) commented Jun 27, 2023

I think I was speaking of a future where we were merged with autocxx / had support for non-Apple platforms, and considering the situation where we would want to start adding support for running in a build script.

And then I was arguing that having a build script generate the bindings for a library is a bad idea for everyone in the ecosystem, since it forces users of the library (not just creators of the library) to have clang installed / compile parts of LLVM, and that I really think "run the generator once and commit / cargo publish the result" is the way to go.

Basically: Just me telling myself that the ClangImporter idea is still sound, even though there are cross-platform difficulties.

silvanshade (Contributor) commented Jun 27, 2023

> And then I was arguing that having a build script generate the bindings for a library is a bad idea for everyone in the ecosystem, since it forces users of the library (not just creators of the library) to have clang installed / compile parts of LLVM, and that I really think "run the generator once and commit / cargo publish the result" is the way to go.

Generally, I think what you suggest here is ideal, if possible, in that it would be nice for users to not need additional toolchain artifacts installed (which may be difficult to set up) in order to consume the library.

For Objective-C oriented libraries, I think this is probably fine, and I think icrate already demonstrates this.

For cases where this new tooling may be used for C++ libraries, however, there is a potential complication that may make it more difficult to achieve.

Specifically, one of the differences between cxx and autocxx (which I alluded to earlier with the "more flexible memory management" comment) is that autocxx uses the moveit library, which makes it possible to allocate C++ values on the stack. Essentially, it creates a MaybeUninit wrapped up with some special types called Slot (stack storage) and MoveRef (which behaves a bit like a Pin<&mut T> to the slot location afterwards, but with an associated drop flag), and then passes a pointer across the FFI boundary, where C++ uses placement-new to allocate in this space.

This is in contrast to cxx, which basically only allows you to allocate into unique_ptr, shared_ptr, or raw pointers.

But in order for autocxx to be able to make the moveit allocation machinery work, it needs to actually know the size and alignment of the C++ types.

Unfortunately, since C++ doesn't have a stable ABI, most of those size and alignment details will be implementation-specific (i.e., dependent upon the user's build environment), and may even vary between toolchain versions.

So this is, I think, at least one reason why autocxx kind of has to do things that way, as long as they want to keep allowing the (potentially much more efficient) stack allocation, rather than heap-allocating everything (unless you use raw pointers and a lot more unsafe code) like with cxx.
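
To illustrate the mechanism (a conceptual sketch, not moveit's literal API - the cpp_* shims, size, and alignment here are hypothetical, and the real values would have to be queried from the user's C++ toolchain at build time, which is exactly why this can't be baked into pre-generated, published bindings):

use core::mem::MaybeUninit;
use core::pin::Pin;

const SIZEOF_FOO: usize = 24; // toolchain-dependent!

extern "C" {
    fn cpp_construct_foo(slot: *mut u8); // wraps `new (slot) Foo()`
    fn cpp_destroy_foo(slot: *mut u8);   // wraps `slot->~Foo()`
}

#[repr(C, align(8))] // alignment is toolchain-dependent too
struct Foo([MaybeUninit<u8>; SIZEOF_FOO]);

impl Foo {
    /// Placement-construct into caller-provided storage; the value must
    /// never be memcpy-moved afterwards, hence the Pin.
    unsafe fn emplace(slot: &mut MaybeUninit<Foo>) -> Pin<&mut Foo> {
        cpp_construct_foo(slot.as_mut_ptr().cast());
        Pin::new_unchecked(slot.assume_init_mut())
    }
}

impl Drop for Foo {
    fn drop(&mut self) {
        unsafe { cpp_destroy_foo(self as *mut Foo as *mut u8) };
    }
}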

madsmtm (Owner) commented Jun 29, 2023

Damn, that's really unfortunate!

I guess this could be resolved by working on giving C++ a stable ABI, but that's far out of scope, so it probably makes sense for us to only target Objective-C and plain C.

silvanshade (Contributor) commented:

> I guess this could be resolved by working on giving C++ a stable ABI, but that's far out of scope, so it probably makes sense for us to only target Objective-C and plain C.

Yeah, that might be a more realistic goal. And in any case, there'd still be room for these tools to adopt whatever we manage to put together if they want to try to switch over from libclang on their own initiative.

There is one other possibility I was thinking of though after I wrote that comment: we might be able to use a mechanism like the diffing you have in mind for the Objective-C platform stuff for this same purpose on the C++ side.

I'm not sure how many targets would be involved, but there is only a (reasonably small) finite number of possible targets to consider. And there's also some support crates in the ecosystem to help with some of this stuff, like link-cplusplus, which lets you choose between libstdc++ (GCC) or libc++ (clang) when linking.

silvanshade (Contributor) commented Jul 20, 2023

@madsmtm Could you provide a few small example header files with the kind of data that the current header translator has trouble dealing with and that you are hoping the ClangImporter might handle better?

Asking because I've just gotten to the point now where I'm able to scan through decls after the ClangImporter loads the modules and it would be useful to have some test cases so I know which functionality to focus on implementing.

I have the implementation up at https://github.com/silvanshade/cxx-swift, although not updated with the most recent changes yet. It's usable, but requires manually checking out several of the dependencies from my other repos, and then manually compiling the llvm/clang/swift toolchain (the steps of which are more or less still described at https://github.com/silvanshade/framework-translator).

Also if you try to build the cxx-swift and related code, you'll have to set the SWIFT_PROJECT environment variable first, e.g., SWIFT_PROJECT=<path to the swift project dir> cargo build.

There are still a lot of basic things left to be done, but I think the cxx-swift crate with the ClangImporter functionality should start to be really usable in the next couple weeks. I'll try to add informative examples on how to get things working in the included tests, so that's where I'd start as far as trying to understand how things are structured.

EDIT: You can see how the module loading and access to the clang decls works starting with this test.

The test creates two example modules with the following data:

A.h

int
foo();

int
baz();

int
qux();

struct s {
    int n;
};

@interface TheClass
- (TheClass *)initWithSomeDatumX:(int)x andSomeDatumY:(int)y;
@end

@protocol TheProtocol
- (void)doSomethingWithTheClass:(TheClass *)someInstance;
@end

#define SQUARE(n) (n*n)

B.h

int
bar();

module.modulemap

module M {
    header "A.h"
}
module N {
    header "B.h"
}

When run, it loads the modules M and N from the module map and produces the following output by dumping the swift lookup tables for the underlying clang modules (and the clang AST NamedDecls can be obtained from these lookup tables for later traversal):

swift module: processing
swift module: successfully loaded
swift module: found underlying clang module
swift module: successfully loaded swift lookup table for clang module
swift module: processing base names from lookup table

name: __builtin_va_list
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: Class
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: 
entry: <NamedDecl>
kind: ObjCMethod
<successfully casted to ObjCMethodDecl>

name: __int128_t
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: baz
entry: <NamedDecl>
kind: Function
<successfully casted to FunctionDecl>

name: doSomethingWithTheClass
entry: <NamedDecl>
kind: ObjCMethod
<successfully casted to ObjCMethodDecl>

name: qux
entry: <NamedDecl>
kind: Function
<successfully casted to FunctionDecl>

name: Protocol
entry: <NamedDecl>
kind: ObjCInterface
<successfully casted to ObjCInterfaceDecl>

name: doSomething
entry: <NamedDecl>
kind: ObjCMethod
<successfully casted to ObjCMethodDecl>

name: TheClass
entry: <NamedDecl>
kind: ObjCInterface
<successfully casted to ObjCInterfaceDecl>

name: foo
entry: <NamedDecl>
kind: Function
<successfully casted to FunctionDecl>

name: s
entry: <NamedDecl>
kind: Record
<successfully casted to RecordDecl>

name: SQUARE
entry: <ModuleMacro>

name: id
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: n
entry: <NamedDecl>
kind: Field
<successfully casted to FieldDecl>

name: __uint128_t
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: TheProtocol
entry: <NamedDecl>
kind: ObjCProtocol
<successfully casted to ObjCProtocolDecl>

name: SEL
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: __builtin_ms_va_list
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: __NSConstantString
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>


swift module: processing
swift module: successfully loaded
swift module: found underlying clang module
swift module: successfully loaded swift lookup table for clang module
swift module: processing base names from lookup table

name: __builtin_va_list
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: bar
entry: <NamedDecl>
kind: Function
<successfully casted to FunctionDecl>

name: Class
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: __int128_t
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: id
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: Protocol
entry: <NamedDecl>
kind: ObjCInterface
<successfully casted to ObjCInterfaceDecl>

name: __uint128_t
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: SEL
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: __builtin_ms_va_list
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

name: __NSConstantString
entry: <NamedDecl>
kind: Typedef
<successfully casted to TypedefDecl>

madsmtm (Owner) commented Jul 29, 2023

Things that are really hard with libclang:

  • Attributes (__attribute__((ns_returns_retained)), __attribute__((swift_attr("@UIActor"))), __attribute__((swift_name("MyClass"))), ... - usually added via macros).
  • Type information (__kindof, nullability, const, attributes on types).
  • Modules, including parsing the information in the module.modulemap file.
  • Converting inline expressions (used by inline functions, enumerations and constants).
  • API notes (FrameworkName.apinotes).

It at least looks like the module loading part works, which is just wonderful news! Though I may be mistaken, it seems like the order of things in the file gets jumbled, which is unfortunate, but if not avoidable then we'll have to live with it.

madsmtm (Owner) commented Jul 29, 2023

So a test case exercising a few of these could be something like:

// A comment

/** A doc comment */
__attribute__((enum_extensibility(open))) enum TestEnumExtensibility: int {
    TestEnumValue = 1,
    TestEnumValueExpression = 1 + 1,
    TestEnumValueExpressionComplexExpression = TestEnumValue + TestEnumValueExpression,
} __attribute__((swift_name("MyTestEnum")));

@interface TestClassWithGeneric<__covariant GenericType>

- (void) methodTakingComplexFunction: (NSInteger (__attribute__((noescape)) *)(GenericType, void * _Nullable)) myFn;

@property const char* propertyWithAttribute __attribute__((objc_returns_inner_pointer));

- (instancetype)initDesignated __attribute__((objc_designated_initializer));
@end

If we can also somehow make use of the information in MappedTypes.def and CFDatabase.def in the future, instead of having to reimplement that, that'd be nice!

silvanshade (Contributor) commented Jul 29, 2023

> Things that are really hard with libclang:
>
>   • Attributes (__attribute__((ns_returns_retained)), __attribute__((swift_attr("@UIActor"))), __attribute__((swift_name("MyClass"))), ... - usually added via macros).
>   • Type information (__kindof, nullability, const, attributes on types).
>   • Modules, including parsing the information in the module.modulemap file.
>   • Converting inline expressions (used by inline functions, enumerations and constants).
>   • API notes (FrameworkName.apinotes).

Thanks, that should give me a better idea of which parts of the API to focus on.

> Though I may be mistaken, it seems like the order of things in the file gets jumbled, which is unfortunate, but if not avoidable then we'll have to live with it.

Yeah, I think there isn't any guarantee about the order in which the entries are presented from the Swift lookup tables.

It is theoretically possible to use the AST walker/visitor functionality instead (either on the Swift AST side, or the Clang AST side), but there are a number of problems with that approach.

First, just having a faithful encoding of the C++ API for that functionality available in Rust through FFI has turned out to be more difficult than I expected, mostly due to various problems encoding the corresponding virtual class hierarchy and its heavy use of templates, so I put aside efforts to add bindings for that for the time being.

But I'm not sure if the declarations as they would be synthesized in the Swift module would actually maintain the order from the C headers or not, and I'm also not sure if that is true of the Clang modules (synthesized from the module.modulemap files).

It may be that the only guaranteed way to maintain the order of the declarations as they appear in the headers is to traverse the AST of the headers themselves, in which case I think we kind of lose the advantage of working with modules to begin with, although we should still have more fine-grained control over accessing the AST than we do now with libclang.

In the end though, it may not really matter much for us either way:

The clang modules will have already been fully loaded by the time the lookup tables are available, which means that as we iterate through the individual declarations, we can always obtain the related entities (e.g., the class for a method, or whatever) by using the accessor methods for clang::Decl and its subclasses. We could use that approach to populate our internal tables on-demand, and just skip declarations that have already been processed if they are encountered again.

If we want the generated code to roughly follow the same order as the declarations in the C headers, we could re-create the original ordering by looking up the source location for each decl we encounter and using that as a sorting key.

madsmtm (Owner) commented Jul 30, 2023

I am not sure I've said this before, but I honestly think you may be shooting yourself in the foot by going the C++ -> Rust route with this; I suspect the project would have been a lot easier to just write in plain C++. But done is done, and I get that the other way may be more fun (for me too), so I won't say anything else about it.

What I'm using as a sort of baseline is that Xcode has a (somewhat hidden) feature where you can view a Swift-translated version of header files; I don't think that part is open-source, but it tells me that everything there is definitely possible to do. And this feature retains the ordering of items correctly.

But again, the ordering is not that important, it's just a nicety when debugging, and makes it easier to understand the source, but definitely something that we can add later. So don't worry about it for now!

silvanshade (Contributor) commented Jul 30, 2023

> I am not sure I've said this before, but I honestly think you may be shooting yourself in the foot by going the C++ -> Rust route with this; I suspect the project would have been a lot easier to just write in plain C++. But done is done, and I get that the other way may be more fun (for me too), so I won't say anything else about it.

It's a fair point. Certainly, building the FFI to the C++ API is very complex and has required a lot of effort.

But on the other hand, when I started this attempt, I didn't nearly have the required C++ knowledge to have built such a tool. Early on, even reading some parts of the C++ source was difficult. I think having the goal in mind to understand things well enough to translate to Rust forced me to learn a lot of very subtle aspects of C++ programming.

Along the way I managed to learn enough to be able to fix some long-standing bugs in autocxx and to even build some sophisticated auto-translation layers on top of cxx that don't require bindgen but instead use newer C++ template metaprogramming features (stuff I wouldn't have understood at all a few months ago).

In the end, although it has taken longer, I think it may lead to a better overall outcome. I learned more than I would have otherwise. And I'm still hoping that whatever we build might be useful for people that need something with more features than bindgen.

But having said that, I'm not opposed to the idea of writing something in pure C++ now if it turns out we run into insurmountable issues with interfacing the API with Rust.

> What I'm using as a sort of baseline is that Xcode has a (somewhat hidden) feature where you can view a Swift-translated version of header files; I don't think that part is open-source, but it tells me that everything there is definitely possible to do. And this feature retains the ordering of items correctly.
>
> But again, the ordering is not that important, it's just a nicety when debugging, and makes it easier to understand the source, but definitely something that we can add later. So don't worry about it for now!

I believe I have actually seen the part of the code base you are referring to.

Take a look at printHeaderInterface in swift/include/swift/IDE/ModuleInterfacePrinting.h and then the implementation of the function in swift/lib/IDE/ModuleInterfacePrinting.cpp.

In fact, in this part, it looks like they are even using the method I suggest (i.e., sorting the decls by source location):

// Sort imported declarations in source order.
std::sort(ClangDecls.begin(), ClangDecls.end(),
          [&](Decl *LHS, Decl *RHS) -> bool {
            return ClangSM.isBeforeInTranslationUnit(
                                          getEffectiveClangNode(LHS).getLocation(),
                                          getEffectiveClangNode(RHS).getLocation());
          });

madsmtm (Owner) commented Jul 30, 2023

> looks like they are even using the method I suggest

Huh. Well, let's do that then!

> Along the way I managed to learn enough

Yeah, that's partially also why I haven't started, so a good long look at your Rust code would probably help me along too. I'll try to get to it in the coming week, and definitely will get started before August is over (my new computer is arriving in a few weeks, so I'm a little reluctant to do a lot of setup I'll just have to redo... But then again, I might learn something so I can do it better next time).

silvanshade (Contributor) commented:

I've updated the cxx-swift repo with instructions on how to build for both macOS and Linux, and I've also included a devcontainer which makes it easy to launch a pre-built environment: one can avoid all the long and complicated build steps and instead just clone the repo and quickly get started with development.

At this point the crate is starting to become usable and I'll probably start adding more functionality a lot more quickly now.

Over the next week I plan to get the CI set up and work on some small refactorings for the lower level libraries and then focus on adding bindings for the stuff you mentioned in the previous post with the example you gave.

I think what I'll do is try to build a small proof of concept tool in the framework-translator (which is currently completely outdated, so ignore the README) using cxx-swift, rather than try to adapt header-translator just yet, mostly because it should be a little easier to experiment with the design that way until I have a better idea about what works, and then we can figure out where to go from there.

I'd also like to perhaps experiment at some point with generating the final Rust code directly from the translator, rather than producing the macro output as an intermediate step. I have most of the code needed for that already written from the proc-macro attempt I made a while ago, and it would be interesting to see what kind of difference that might make.

Anyway, let me know if you have any issues trying to build or questions about using the crate whenever you start looking into that. Hopefully I'll have some more in-depth example code to show soon.

silvanshade (Contributor) commented:

Just wanted to give an update on the cxx-swift status:

I've been focusing on refactoring and refining the build process and parts of the lower-level crates in order to enable a smooth installation process and CI pipeline. That has been going well but has taken a bit longer than I expected due to the overall complexity of the Swift toolchain build process and related factors. (I've had to take a bit of a detour and learn more about CMake than I ever intended, for instance.)

As I made progress toward that, and started to enable CI for a few of the crates, I realized it would be a good idea to make a few more radical refactorings to the build process now in order to make things much more robust to future changes and easier to maintain across platforms.

My current thinking is to scrap the Docker devcontainer stuff entirely both for the "easy" development option and for CI and instead switch over to using homebrew "bottles" to package up the pre-built dependencies. This should be feasible since homebrew already has formulas for both LLVM 16 (with clang) and Swift 5.8.1 on both macOS and Linux. So it's just a matter of adapting the current formulas for those for this specialized use case and installing the built artifacts in a different prefix.

This has a number of advantages over the previous approach I was using.

One major advantage being that the development process on macOS (which would realistically be our primary platform) will be much smoother and the "easy" development option simply requires a brew install cxx-swift rather than having to deal with the quirks of developing in a Linux-based devcontainer.

Another advantage is that it should be easy enough to be able to cargo install header-translator once the user has installed the homebrew bottle and there's already precedent for this approach for other crates that depend on native libraries (e.g., openssl).

Unfortunately, that also means it's going to be a little longer until the crates are ready to start iterating on adding in more features again. It will be worth it though I think, and based on the progress toward that goal, I'm hoping that things will be back on track by the beginning of September. I'll update this thread again once there's more news about that.

madsmtm (Owner) commented Aug 28, 2023

For my part, I'll either install things a bit manually, or use nix to do it, so don't worry about me in that equation; but having a simple installation that doesn't require development containers sounds really nice, and would definitely help others get started.

I've begun setting up your repos and such locally - could you tell me about the moveref crate? How does it differ from moveit?

silvanshade (Contributor) commented:

Sounds good. Just as a word of caution, there might be a bit of breakage over the next few days still but the build-system rewriting is going pretty well so things should be back on track soon.

As far as moveref is concerned, it is basically a partial fork / reimplementation of moveit.

Originally, I started working on it because I had fixed some soundness issues in moveit (related to unsound mutable access to certain Pin types and cxx::UniquePtr) and it wasn't clear at the time that the PR I had submitted would actually land because the project is not actively maintained.

I needed that for the cxx-* crates (having decided to take that part from autocxx, but not the rest of the bindgen-based functionality) and wanted to be able to publish those, so the most practical solution seemed to be to begin maintaining a fork.

But at that point I decided to just refactor most of the rest of the code, removing some functionality that I didn't feel was needed (e.g., regarding different flag types) and making some other parts safer and perhaps a little more obvious. I also ended up changing the macro interface a little.

Overall, it works more or less the same way as moveit, but the intention is that it's more actively maintained and I plan to evolve the functionality over time to fit with the needs of cxx-auto. I also have a little more rigorous testing pipeline in place for it since I test with miri and valgrind in CI.

The cxx-auto crate is something I created originally to deal with the issue of not having a stable ABI for C++, if you remember the discussion from before.

It uses a phased approach where you declare some data about the C++ classes that will be used in a series of .json files, and then it takes that information and combines it with some C++ template metaprogramming to generate a series of FFI bindings that provide information about the class.

For instance, if you have some crate called cxx-foo, you'll also have a cxx-foo-auto (which corresponds to the sort of ABI bootstrapping phase I just described) and then another cxx-foo-build crate, which contains some common build.rs definitions you will use for both of those.

You can see an example of the sort of information it generates here:

Recently I've been extending cxx-auto to generate a lot of common trait impls as well, for example:

Most of the important basic traits are covered pretty well now. I have plans to auto-generate Iterator impls and a few other things as well (partially done already) once I finish up the build process refactoring and get back to the bindings.

madsmtm (Owner) commented Aug 28, 2023

Cool, thanks for the explanation, I'll probably come back with more questions as I get further along. And no worries about the breakage, I still need to learn how LLVM, Clang and Swift works, and read a bunch of C++, which is going to take me a while anyhow.

A recommendation though: put everything that you can in one repository - it makes things much easier to maintain (and for others to contribute).

Btw, I'm glad to see that you have such a focus on soundness, that's really important to me as well.

silvanshade (Contributor) commented:

> A recommendation though: put everything that you can in one repository - it makes things much easier to maintain (and for others to contribute).

Yeah, that's a good suggestion.

I've kind of gone back and forth with regard to how to structure the crates. Originally, I did have most everything in a single repo, but I started to feel overwhelmed with the complexity of it. Refactoring was often difficult.

Breaking things up into separate repos forced me to clean up the messier parts and modularize the organization a lot. In the end, I think it helped clarify the design goals also.

But I admit things are too scattered around right now. I will try to consolidate more soon. I'm also thinking about putting all of the remaining central crates under an organization, just to help with the visibility.

I've put it off for a bit since a few of the crates I've created recently ended up becoming redundant and unused shortly after I created them and I didn't want to clutter the main repos too much with the churn from that. But since the design is starting to settle, I'll focus more on consolidation again.

silvanshade (Contributor) commented:

Another status update regarding the cxx-swift and related crates:

I'm still making progress on the refactoring with respect to the build process and related issues like portability and maintainability.

Things have again taken longer than expected due to unanticipated complexity, which is largely unavoidable just given the nature of interfacing with the LLVM, Clang, and Swift toolchains, regardless of the choice of Rust or C++ for implementation.

What I've done is switch from the more hand-crafted build.rs scripts I was using over to an approach that delegates most configuration and environment-detection functionality to CMake. The new build.rs scripts essentially invoke CMake to generate a context file during the CMake configure step, and then use the information from that to drive the rest of the build.

This has several advantages, including the fact that we can tie directly into the CMake build scripts for the LLVM, Clang, and Swift libraries, which gives us access to all of the targets those scripts generate along with all the compiler and linker configuration settings.

Previously I was hardcoding a lot of that information in the build.rs, which, in addition to being brittle, wasn't entirely accurate in a number of cases, and couldn't easily account for things like per-target configuration changes when adjusting build features for the various involved libraries, or subtle platform differences for that matter.
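
The rough shape of such a build.rs, using the cmake crate (the project path, option, and file names here are illustrative):

fn main() {
    // Configure the C++ side; the `cmake` crate drives the configure and
    // build steps and returns the output directory.
    let dst = cmake::Config::new("cxx-swift-cmake")
        .define("CXX_SWIFT_EMIT_CONTEXT", "ON") // hypothetical switch
        .build_target("context")
        .build();
    // The configure step resolves toolchain paths, compile flags, and link
    // libraries into a context file of ready-made cargo directives.
    let ctx = std::fs::read_to_string(dst.join("build/context.cargo"))
        .expect("CMake did not emit the context file");
    for line in ctx.lines() {
        println!("{line}"); // e.g. cargo:rustc-link-lib=static=swiftClangImporter
    }
}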

Additionally, I've written some high-level CMake modules that encapsulate common functionality needed for those crates, which also handles things like detection of additional development tools, to aid in a comfortable development and CI workflow.

After a lot of experimentation, I've also switched away from trying to rely on packagers like Homebrew and Scoop for prebuilt artifacts and have instead decided to focus on using vcpkg for that.

This has a number of advantages, including the fact that it integrates seamlessly with CMake, and also provides an easy way to fetch all of the prerequisites needed to build various libraries (like cmake, ninja, etc) along with various libraries (like ncurses, zlib, zstd).

The dependencies can be fetched local to the specific crate (using a vcpkg.json manifest), but the built artifacts are cached system-wide, which is important for efficiency. And it also supports caching the compiled artifacts as GitHub packages, enabling us to use CI to generate caching, which would make even first-time builds significantly faster for new contributors.

Importantly, vcpkg, unlike some of the alternatives I considered, works consistently across macOS, Linux, and Windows. In fact, I've been able to get most of the way toward a working Windows build of these crates, which is something I wasn't sure would be feasible before.

The end result of the refactoring to using CMake more directly along with vcpkg means that we won't require any additional configuration steps or installed pre-requisites prior to invoking cargo build. Not even vcpkg is required to be installed. The build.rs script will instead clone the vcpkg repo local to the crate workspace and bootstrap vcpkg (both of which are fast), fetch whatever tools are needed like cmake and ninja and whatever libraries are needed to compile the LLVM, Clang, and Swift toolchain libraries, and then proceed with the Rust compilation. And caching will be used seamlessly along the way wherever possible to speed up compilation.

The overall workflow will be much simpler for new contributors and easier for us to maintain since the compilation environment will be consistent and isolated from many differences due to platform and packager configuration choices.

I'm still in the process of putting all of the pieces together and updating the current repos but am hoping that should be ready before too much longer. I probably won't have the full caching solution in place for a while longer still since that will require more experimentation. But I'm comfortable moving forward with the rest now, knowing that robust caching functionality is available.

silvanshade (Contributor) commented:

Thought I should give another progress update on this:

I spent the last few weeks doing a lot of experimentation along the lines I described in the last post. I managed to get most of the vcpkg approach working, along with full caching, and even found a nice solution for automatically building cached artifacts from CI and hosting them in a public registry.

However, in the end I wasn't quite satisfied with that solution.

The first problem is that I discovered the vcpkg caching is more sensitive to the host development environment than I expected, even after several efforts to mitigate this and insulate the toolchain build from those factors.

For example, I explored building a multi-stage bootstrapped toolchain, statically linked, and distributed with a complete set of LLVM toolchain runtime artifacts (e.g., builtins, compiler-rt, libcxx, libcxxabi, libunwind, etc.), even exploring statically linking against musl or llvmlibc, so not even glibc would be needed on the host. After bootstrapping in this way, I removed the compiler toolchain from the part that computed the ABI caching keys for the vcpkg package installs.

But ultimately that led to a larger distribution than I was happy with, and even still, there remained some overall fragility in other parts of the vcpkg caching solution, which is just kind of unavoidable due to ABI compatibility issues across so many parts of the host environment.

The possibility that a user might still experience a build time of a couple dozen minutes, versus only a few minutes, without much idea of why, wasn't really acceptable to me. This is made worse by the fact that you can't really report detailed information interactively during build.rs in a robust way.

Another issue I encountered is that the vcpkg LLVM package is kind of complex and messy and doesn't quite provide the configuration options I needed, and it's also still on version 15 (although there is a PR for 17). This meant I needed to implement my own version. I managed to do that without too much issue, but didn't like the idea of having to maintain another relatively complex artifact just for this single use case.

I also found that, even with everything fully cached, there were still a few minutes of overhead just to assemble and initialize the toolchain, due to all of the internal moving parts of this approach (e.g., fetching and bootstrapping vcpkg, all the individual starting of processes, individual unarchiving, plus the overhead from CMake, plus the overhead from cargo, etc.).

I eventually went back to the drawing board and started over from scratch, this time using a much simpler, but less isolated approach.

What I'm doing now is using CMake's ExternalProject functionality to accomplish something roughly similar to what a from-scratch vcpkg build would have done, e.g., fetching, assembling, patching, building, and then packaging the parts of the LLVM + Clang + Swift toolchain we need, but doing this explicitly beforehand in an automated fashion and providing these artifacts directly through GitHub releases, instead of relying on caching to maybe fetch them, or maybe not, depending on various subtle environment factors out of our control.

Currently I have this working on macOS, Linux, and Windows, producing a package with all of the headers and libraries from Clang, LLVM, and Swift. The build is heavily optimized to build exactly the components we need, and nothing else, and produces an archived, location-independent package around 250MB. The libraries are also statically linked and compiled with thin-LTO, so we can use them with Rust's linker-plugin-based LTO for cross-language LTO, so we can essentially be just as efficient as a native C++ implementation.

Getting this to work took a bit more effort than I anticipated, since the Swift libraries are not quite designed (with respect to the provided CMake configuration) to be exported and used in a relocatable fashion outside of the apple/swift source tree. But I managed to put together a series of patches to fix that, along with some similar, but less severe, portability configuration issues with LLVM and Clang. I will try to get those patches upstreamed a little later on.

The next step is to clean up a few rough edges in the build script, then start building and providing these packages built from CI and hosted as releases for the associated GitHub repo. I plan to provide packages for Linux (GNU), macOS (Darwin), and Windows (MSVC) OS/ABIs, individually for x86_64 and aarch64 hosts, and with targets configured for x86_64 and aarch64 (for Apple Silicon and portable Apple targets). It will be trivial for someone to run the build script locally to produce packages for other host/target combinations as needed.

Once that is accomplished, I will modify the cxx-auto crates to download and cache these tooling packages (in ~/.cache/cxx-auto) during build.rs, which should be quite fast even the first time around, with the only additional overhead being some CMake processing to locate the libraries, load the associated C++ configuration details, and expose that to build.rs, all of which should take just seconds.

One potential additional complication, however, has to do with linking these libraries since they are compiled with clang-specific LTO. In order to not require that the user has a clang toolchain installed, I'm considering also shipping lld with these package archives, or potentially the zig compiler instead.

The zig compiler sounds like a surprising choice for this, but in fact it's quite useful as a portable, isolated clang toolchain, and provides a drop-in replacement clang, ar, lld, etc., along with several libc distributions, and somehow still manages to only weigh in at less than 100MB archived. It's quite adept as a cross-compiler and linker, and in fact, there's already some cargo tooling designed to use it in exactly this way with cargo-zigbuild.

It's taken quite a bit of effort to get to this point but it appears that we can have an essentially transparent process for installing and building all of this stuff without the user having to worry about configuring anything outside of their usual Rust environment.

Although it has taken a lot longer than I had hoped, I think it will be worth the effort to have gotten here since it makes the idea of a more capable replacement for bindgen fairly plausible. And since this approach is working fine on Windows too now, that should also address the earlier concerns about availability of the necessary libraries there (we just provide them ourselves).

silvanshade (Contributor) commented Oct 27, 2023

Another update:

The toolchains build scripts are now up at https://github.com/llvmup/toolchains

I have the CI workflow building the toolchains automatically: https://github.com/llvmup/toolchains/actions/runs/6668236820/job/18123391704

The builds take a while until the ccache is warmed up, but should be faster the second time around.

I should have the release tarballs being deployed in the next few days. These should look like the following:

 80M clang-llvmorg-17.0.3-x86_64-linux-gnu.tar.zst
141M llvm-llvmorg-17.0.3-x86_64-linux-gnu.tar.zst
 52M mlir-llvmorg-17.0.3-x86_64-linux-gnu.tar.zst
 32M tools-llvmorg-17.0.3-x86_64-linux-gnu.tar.zst

Once that's ready, I will update the cxx-* crates to automatically pull those as needed during build time. Should be a seamless process hopefully.

EDIT: some of the distributions are uploaded now under releases: https://github.com/llvmup/toolchains/releases

madsmtm (Owner) commented Dec 7, 2023

I hadn't properly read through your comments here before now - wow, what a ride you've been on with so many build systems; settling on CMake in the end sounds like the correct decision.

Really, thank you so much for doing this work, it is so nice that you have such a huge focus on making this installation path seamless for new contributors!

> providing these artifacts directly through GitHub releases
> [...]
> modify the cxx-auto crates to download and cache these tooling packages
> [...]
> not require that the user has a clang toolchain installed

While the user-experience is slightly nicer, I think it's a bad idea to download things in build scripts, since it makes it harder for people to do e.g. security verification, you have to pull in extra dependencies to fetch and unpack the package from the network, and it breaks things like Cargo's --offline flag.

I see that you've created llvmup, I think requiring users of the cxx-* crates to run a command like llvmup toolchain add llvm-17 is a better way forwards.

All of that said though, have you considered shipping the tarballs via crates.io instead? E.g. something like a set of llvm-prebuilt crates, that work similarly to how windows-sys does its target-specific stuff, and which cxx-llvm-auto could depend on:

llvm-prebuilt-aarch64-apple-darwin/
  include/
    .. snip ..
  lib/
    .. snip ..
  build.rs
  Cargo.toml
llvm-prebuilt-x86_64-apple-darwin/
llvm-prebuilt-x86_64-linux-gnu/
.. snip ..

This way, we could also potentially support building from source in the future - although this would mostly be an option for the pedantic, since building from source will always take a long time. (The openssl crate similarly supports a "vendored" cargo feature that builds from source.)
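
Concretely, the build.rs of such a prebuilt crate could be almost trivial - a sketch (the library name is a placeholder), with the static libraries shipped inside the published crate:

fn main() {
    // The published crate carries the prebuilt static libraries in ./lib,
    // so we only need to point rustc at them.
    let manifest = std::env::var("CARGO_MANIFEST_DIR").unwrap();
    println!("cargo:rustc-link-search=native={manifest}/lib");
    println!("cargo:rustc-link-lib=static=clangAST"); // one line per bundled lib
}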

If you think this is possible, then I'll go and ask the crates.io team how they feel about us uploading fairly huge binary files to their servers.

Related: There was the whole serde_derive shipping a binary debacle a while ago, which highlights that if we're to ship any binaries, the build steps must be 100% reproducible.

> shipping lld

I'm pretty sure rustup already ships that as rust-lld (available under a bit of a weird path: ~/.rustup/toolchains/stable-$TARGET/lib/rustlib/$TARGET/bin, so unsure if that's actually considered stable?).

> zig compiler

This would be for compiling C++ files required by cxx-*/header-translator, right? It might be interesting in the future, though I think requiring the user to have a working clang installed to use this is a fair requirement - at the very least as a starting point!

> more capable replacement for bindgen fairly plausible

That sounds awesome, I'm all for header-translator ending up as just being bindgen!

silvanshade (Contributor) commented Dec 7, 2023

> I hadn't properly read through your comments here before now - wow, what a ride you've been on with so many build systems; settling on CMake in the end sounds like the correct decision.
>
> Really, thank you so much for doing this work, it is so nice that you have such a huge focus on making this installation path seamless for new contributors!

Thanks.

I wish the process of getting this put together hadn't taken quite so long, but honestly I couldn't really find another way to build something maintainable and portable around the LLVM ecosystem libraries.

While the user-experience is slightly nicer, I think it's a bad idea to download things in build scripts, since it makes it harder for people to do e.g. security verification, you have to pull in extra dependencies to fetch and unpack the package from the network, and it breaks things like Cargo's --offline flag.

That's a good point.

One possibility I've been considering is fetching the dependencies automatically by default, while allowing that behavior to be overridden (through environment variables or feature flags) to disable binary downloading, or to provide a different fetch location, perhaps a local source.
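To sketch what I mean (the variable names here are placeholders; nothing like this is implemented yet), the build script could check an opt-out before attempting any fetch:

    // Hypothetical sketch only: CXX_LLVM_NO_DOWNLOAD and CXX_LLVM_PREBUILT_DIR
    // are made-up names for the opt-out and the local artifact location.
    fn main() {
        println!("cargo:rerun-if-env-changed=CXX_LLVM_NO_DOWNLOAD");
        println!("cargo:rerun-if-env-changed=CXX_LLVM_PREBUILT_DIR");
        if std::env::var_os("CXX_LLVM_NO_DOWNLOAD").is_some() {
            // The user opted out of downloading and supplies their own artifacts.
            let dir = std::env::var("CXX_LLVM_PREBUILT_DIR")
                .expect("CXX_LLVM_NO_DOWNLOAD is set, so CXX_LLVM_PREBUILT_DIR must point at local artifacts");
            println!("cargo:rustc-link-search=native={dir}/lib");
        } else {
            // Default path: fetch and cache the prebuilt components.
        }
    }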

There are two main reasons I can think of for keeping that the default:

  1. It's the easiest thing for users to just try out the libraries, if they don't care about setting up all the other stuff

  2. It would allow us to have the crates build seamlessly on docs.rs

I'm not entirely convinced either is that important, but the latter would sure be convenient.

Building and hosting the libraries locally isn't a huge problem, but I couldn't figure out how to get docs.rs to forward to the repo-local documentation link instead of displaying the "build failed" page, which is annoying.

I'd also mention that if this were to be the default, there would definitely be a Security section in the README and docs noting that -- especially for production use -- it would be better for users to disable that and either fetch the binaries themselves or (even better) build the artifacts locally and use those.

I see that you've created llvmup, I think requiring users of the cxx-* crates to run a command like llvmup toolchain add llvm-17 is a better way forwards.

I'm open to this possibility. One downside is that, while most of the llvmup library portion is working now, I haven't done much work on the CLI yet, so that will take a little longer.

What I have working now:

  • llvmup/toolchains is stable, producing the components and manifests for most of the tier 1 platforms
  • llvmup/llvmup (library) can fetch the components and manifests and produce the Cargo.toml features (similar to how they are produced for icrate, where the llvm component libraries are mapped to individual features) and build_llvmup.rs (with linker directives gated by the respective features and platform)

What remains is some refactoring to allow the manifest processing to combine the processed results for multiple platforms (currently it only handles one at a time), and there are also some bugs I'm trying to resolve in the component installer.

All of that said though, have you considered shipping the tarballs via crates.io instead?

Hmm, I hadn't thought of that. It's an interesting idea, and I suppose it could make some things simpler.

With the windows-sys approach, does that mean only the platform-relevant artifacts would be downloaded for the target platform at build time?

If you think this is possible, then I'll go and ask the crates.io team how they feel about us uploading fairly huge binary files to their servers.

If you are willing to contact them and inquire about that, I think that would be fine, at least just to know their stance on the issue.

So I guess what you would want to ask them is whether it would be feasible to upload a crate containing the (patched) LLVM and Swift source trees?

Related: There was the whole serde-rs/serde#2538 debacle a while ago, which highlights that if we're to ship any binaries, the build steps must be 100% reproducible.

Yeah, definitely relevant. I missed that originally but ran across some of the discussion a few days ago.

Unfortunately, this is potentially a problem here too, and I created an issue about it: llvmup/toolchains#9

I think we could theoretically achieve 100% reproducible builds for Linux, but I'm much less confident about the ability to do that for macOS and Windows. And even for Linux, it would still be a lot of additional work to accomplish.

I'm pretty sure rustup already ships that as rust-lld (available under a bit of a weird path: ~/.rustup/toolchains/stable-$TARGET/lib/rustlib/$TARGET/bin, so unsure if that's actually considered stable?).

Oh, interesting.

I wonder why the docs (particularly about linker-plugin-lto) don't mention that. In any case, I'll see if we can just use that.

This would be for compiling C++ files required by cxx-*/header-translator, right? It might be interesting in the future, though I think requiring the user to have a working clang installed to use this is a fair requirement - at the very least as a starting point!

That was the idea, yeah. Theoretically, it would make cross-compiling easier, and it's a relatively compact distribution.

However, I since decided against using zig, at least for the moment.

The problem I ran into is that it seems that zig cc and zig c++ compile everything with debug symbols by default and there's no way to disable that through their clang-compatible frontend.

I'm not sure if it's just ignoring the respective arguments (e.g., -g0) or what exactly. That also made me worry about other potentially unexpected behavior when using it as a replacement for clang.

So what I opted for instead is to just ship the clang binaries as another (optional) part of the llvmup toolchain components (e.g., https://github.com/llvmup/toolchains/releases/download/llvmorg-17.0.6%2Brev2/tool_clang-llvmorg-17.0.6-x86_64-linux-gnu+rev2.tar.xz).

As long as the user has a working Rust toolchain (for the -darwin, -gnu, or -msvc target environments), we don't need most of the other usual stuff that gets installed for a standard llvm+clang deployment.

Also these tool_clang archives are only around 25-30MB, which is even more compact than the zig install.

That sounds awesome, I'm all for header-translator ending up as just being bindgen!

That would be really nice.

@silvanshade
Copy link
Contributor

There's one other perspective to consider that I forgot to mention with regard to download-during-build vs fetch-via-cli for the artifacts:

Unfortunately, because the LLVM C++ libraries don't have a stable API, we will probably need to update the LLVM components more frequently than one might expect.

This is especially going to be true across major LLVM versions, but at the moment I expect that we will need to update the libraries for each LLVM minor release as well, and I've also provisioned for tweak releases (the +revN) to account for changes we might need to make locally for our built artifacts.

I'm a little concerned that requiring the user to manually update the llvmup components via the CLI almost every time they update their cxx-* Cargo dependencies might be too much of an annoyance.

Any thoughts on that?

@madsmtm
Copy link
Owner Author

madsmtm commented Dec 8, 2023

[downloading automatically allow us to] have the crates build seamlessly on docs.rs

I don't think that's true; docs.rs doesn't actually allow network access. Don't worry about that part though: regardless of what we choose, I can help you set it up so that docs.rs still works. They have several flags for that (the DOCS_RS env in build.rs and options for a docs.rs-specific feature flag).
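For reference, a minimal sketch of that escape hatch (docs.rs really does set DOCS_RS, but the cfg name below is just illustrative):

    // Rough sketch: docs.rs sets DOCS_RS, so the build script can skip
    // toolchain discovery entirely there. The `docsrs_stub` cfg name is a
    // placeholder, not an established convention of these crates.
    fn main() {
        if std::env::var_os("DOCS_RS").is_some() {
            // No network and no prebuilt LLVM on docs.rs; emit a cfg so the
            // crate can stub out the FFI surface instead of failing the build.
            println!("cargo:rustc-cfg=docsrs_stub");
            return;
        }
        // ... normal LLVM discovery and linking ...
    }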

With the windows-sys approach, does that mean only the platform-relevant artifacts would be downloaded for the target platform at build time?

Yup!

[asking crates.io team] So I guess what you would want to ask them is whether it would be feasible to upload a crate containing the (patched) LLVM and Swift source trees?

I'll both ask them about uploading (fairly) huge source trees, and (fairly) huge prebuilt binaries. EDIT: Done, see rust-lang/crates.io#7680.

requiring the user to manually update the llvmup components via the CLI almost every time they update their cxx-* cargo dependencies might be too much of an annoyance.

Good point! Perhaps let's table that for now, and see if the crates.io path is feasible.

@silvanshade
Copy link
Contributor

I don't think that's true; docs.rs doesn't actually allow network access. Don't worry about that part though: regardless of what we choose, I can help you set it up so that docs.rs still works. They have several flags for that (the DOCS_RS env in build.rs and options for a docs.rs-specific feature flag).

Ah, that's unfortunate, but not unreasonable. I probably should have checked that first.

Anyway, given that, I'll stop working on the automatic download approach.

Good point! Perhaps let's table that for now, and see if the crates.io path is feasible.

I think I'll expose a bare-minimum interface for downloading through the CLI, at least as a short-term solution. I can re-use most of the automatic downloading logic. It's mostly that I'm hesitant to put a lot of effort into a full rustup-like stable interface just yet.

But if the crates.io path pans out, I'm definitely open to trying to make that work.

@not-jan
Copy link

not-jan commented Jun 17, 2024

Hi, I've been looking to improve the macOS development experience with Rust. What is the status of this issue at the moment? From what I've gathered, the opinion on large prebuilts on crates.io is leaning towards no.

@madsmtm
Copy link
Owner Author

madsmtm commented Jun 17, 2024

No status update on this specific issue, we're still somewhat stuck on figuring out a distribution mechanism for prebuilt LLVM binaries.

Personally I'm leaning towards a combination of:

  • system-provided when possible (e.g. integrate with the package manager's version of LLVM)
  • provide prebuilt binaries of header-translator instead (through e.g. cargo binstall)
  • maybe download from Rust's own CI (if they build enough of LLVM for it to be usable for us)
  • maybe llvmup (though I'm uncertain that'll be accepted by the wider community)
  • or build from source as a fallback


In any case, I'd suggest you open a separate issue; this one is quite large, and it can be hard to track what in particular you are having trouble with regarding the macOS development experience with Rust.

@silvanshade
Copy link
Contributor

Personally I'm leaning towards a combination of:

  • system-provided when possible (e.g. integrate with the package manager's version of LLVM)
  • provide prebuilt binaries of header-translator instead (through e.g. cargo binstall)
  • maybe download from Rust's own CI (if they build enough of LLVM for it to be usable for us)
  • maybe llvmup (though I'm uncertain that'll be accepted by the wider community)
  • or build from source as a fallback

I have some (new) thoughts on this since the last update I posted about the cxx-* and llvmup-related work.

llvmup

First, I think that relying on a system-provided LLVM is going to be difficult for several reasons:

  • versions will have to match exactly (because the API is unstable)
  • some features will be build-time configuration dependent (and distro provided are often minimal)
  • integration with the ClangImporter API as it currently stands requires building from Apple's fork

I think it would certainly be fine to include detailed instructions on how to use a system-provided toolchain, but if that's the default option, many people will probably encounter too much trouble to even get started.

Also I would note that I did kind of explore this approach early on when I was looking at building around vcpkg, but it just ended up being too complicated and still quite slow to get everything configured and installed.

As for llvmup, it really serves two separate purposes:

  1. It's a portable and modular distribution of the necessary components
  2. It leverages the LLVM CMake definitions to discover the build graph details necessary to properly link the LLVM libraries

The JSON manifest files that ship with the llvmup tarballs contain this build info (2) which has all the details like library names, build options, dependencies (for topological sorting in build.rs), etc.

So even if another distribution option were used, the information from (2) is still necessary to smoothly build the cxx-* crates, and unfortunately it's not fully portable across LLVM installs. (Theoretically it might be possible to reconstruct after the fact if the distro installs the necessary CMake files, but I'm guessing that would be much less reliable.)
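To make (2) concrete, here's a rough sketch of how a build.rs could use the dependency edges to emit linker directives in a valid order. The data layout implied by the map below is invented for illustration; the real llvmup manifest schema differs:

    use std::collections::{BTreeMap, BTreeSet};

    // Illustrative sketch only: `deps` stands in for the dependency edges
    // recovered from a build manifest (component name -> components it needs).
    fn emit_link_directives(deps: &BTreeMap<String, Vec<String>>) {
        fn visit(
            lib: &str,
            deps: &BTreeMap<String, Vec<String>>,
            seen: &mut BTreeSet<String>,
            order: &mut Vec<String>,
        ) {
            if !seen.insert(lib.to_owned()) {
                return;
            }
            for dep in deps.get(lib).into_iter().flatten() {
                visit(dep, deps, seen, order);
            }
            order.push(lib.to_owned()); // post-order: dependencies first
        }
        let (mut seen, mut order) = (BTreeSet::new(), Vec::new());
        for lib in deps.keys() {
            visit(lib, deps, &mut seen, &mut order);
        }
        // Reversed, so each library precedes the libraries it depends on,
        // which is the order a traditional static linker wants.
        for lib in order.iter().rev() {
            println!("cargo:rustc-link-lib=static={lib}");
        }
    }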

It's been a while since I updated llvmup, partly because I got kind of burned out on it for a while, but also because I started spending a lot of time with Nix. (I remember you commenting about that a while ago).

I'm thinking now that maybe the best option would be to just distribute the LLVM components as a Nix flake. The flake would still use the llvmup machinery for the actual build, so it would be able to provide the build manifest files needed for the cxx-* crates.

This might make the whole thing more niche, but maybe it's worth it. There's already some precedent for this kind of thing, and I see more and more large projects providing flakes. Mozilla has done it for a while.

My impression anyway is that offering the artifacts via a flake might get more traction than just asking people to download random binaries. And this way they could also build them locally in a reproducible fashion.

The downside is that this isn't a great option for native Windows users unless Nix eventually becomes available there. But we could still provide the raw tarballs in that case, since the llvmup builds already work there.

cxx-clang and related crates

As for the rest of the crates providing the Rust interface to ClangImporter, there's been several developments there recently.

I revamped moveref and added full documentation and test coverage.

I also completely redesigned cxx-auto but haven't updated the repo yet. Previously, it used a combination of C++ template metaprogramming and code generation with syn to automatically derive a ton of boilerplate for the C++ bindings.

This worked well with regard to the end result, but the problem is that it was really slow. It required creating an entirely separate pre-build crate to interface with the results from the C++ template metaprogramming, so that in the final crate's build.rs, cxx could use that to generate the rest of the code needing those definitions.

So in practice that required building 2-3 separate crates, two of them with their own build.rs, and two separate phases of invocations with cxx. Oh, and it also required declaring parts of the boilerplate definitions in JSON manifest files, which all needed to be read at build time (with Serde). So, yeah... slow.

The way it works now is that I've eliminated the need for the separate pre-build crate: during the final crate's build.rs, I invoke cc to build the C++ template boilerplate as a shared library, which is then loaded with libloading to fetch the details needed to generate the rest of the boilerplate, and finally cxx is invoked as normal.
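Roughly, the trick looks like this. This is a simplified sketch; the file names, flags, and exported symbol are stand-ins, not the actual cxx-auto code:

    // Simplified sketch of the probe mechanism; paths, flags, and the symbol
    // name below are placeholders.
    fn main() {
        let out_dir = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap());
        let dylib = out_dir.join("libprobe.so");

        // `cc::Build::compile` only produces static archives, so drive the
        // same compiler it would pick directly to get a shared library.
        let status = cc::Build::new()
            .cpp(true)
            .get_compiler()
            .to_command()
            .args(["-shared", "-fPIC", "-o"])
            .arg(&dylib)
            .arg("cxx/probe.cpp") // the template-metaprogramming boilerplate
            .status()
            .expect("failed to spawn C++ compiler");
        assert!(status.success());

        // SAFETY: we just built this library ourselves in OUT_DIR.
        unsafe {
            let lib = libloading::Library::new(&dylib).expect("failed to load probe");
            // Placeholder symbol: imagine it reports some detail computed by
            // the C++ template metaprogramming, e.g. a type's size.
            let probe: libloading::Symbol<unsafe extern "C" fn() -> usize> =
                lib.get(b"cxx_auto_probe\0").expect("missing probe symbol");
            let _details = probe();
            // ...feed the recovered details into the cxx code generation...
        }
    }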

There are also no more JSON manifest files needed. All of the boilerplate configuration is done in the C++ template files.

I haven't been able to fully test it yet but the individual parts are mostly working now and it seems fast.

The other major issue with the cxx-* crates is that iterative development started becoming quite slow in terms of compiling the actual crates even after all the build.rs magic.

I believe this is basically because each of the binding files needs to import several LLVM headers, which repeatedly pulls in huge amounts of code for the C++ compiler to deal with.

I recently refactored the C++ interfacing code to use C++ modules instead which I think should alleviate this once everything is up and running again. They behave basically like precompiled headers: we can wrap the individual headers in module interfaces which only need to be parsed and compiled to IR once and then loading is much, much faster everywhere else they are used. LLVM itself also has some sort of support for building directly using modules, which could potentially make things even better in this regard, but I haven't experimented with that yet.

Both clang 18 and GCC 14 have good support for modules now, so this seems viable. The one complication is that building C++ code that uses modules requires compiling the sources to object files in the correct order, instead of the current status quo where the order doesn't matter.

Unfortunately, cc doesn't know anything about modules. To fix that, I've been developing cpp-deps, which consists of two crates: p1689, for parsing C++ module dependency files, and cpp-deps itself, a library that can drive cc to compile the source files in the correct order.
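For a rough idea of what the parser deals with (this is partial and from memory; the P1689R5 paper is the authoritative reference), the dependency files are JSON documents that map naturally onto serde types like the ones below. The ordering constraint falls out directly: a translation unit can only be compiled after the units providing the modules it requires:

    use serde::Deserialize;

    // Rough, partial model of the P1689 JSON; field coverage is incomplete.
    #[derive(Deserialize)]
    struct DepFile {
        version: u32,
        rules: Vec<Rule>,
    }

    #[derive(Deserialize)]
    struct Rule {
        // The object file this rule describes.
        #[serde(rename = "primary-output", default)]
        primary_output: Option<String>,
        // Modules this translation unit exports...
        #[serde(default)]
        provides: Vec<ModuleRef>,
        // ...and the modules it imports, i.e. its compile-order dependencies.
        #[serde(default)]
        requires: Vec<ModuleRef>,
    }

    #[derive(Deserialize)]
    struct ModuleRef {
        #[serde(rename = "logical-name")]
        logical_name: String,
    }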

The p1689 parser is complete and the cpp-deps module compiler driver is almost complete.

Once that's done, I plan to update the cxx-* crates to use all this new stuff, and then update llvmup.
